2025-05-14 13:47:21.534117 | Job console starting 2025-05-14 13:47:21.562484 | Updating git repos 2025-05-14 13:47:21.633791 | Cloning repos into workspace 2025-05-14 13:47:21.814442 | Restoring repo states 2025-05-14 13:47:21.842237 | Merging changes 2025-05-14 13:47:21.842277 | Checking out repos 2025-05-14 13:47:22.096438 | Preparing playbooks 2025-05-14 13:47:22.761497 | Running Ansible setup 2025-05-14 13:47:27.213522 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-05-14 13:47:27.968113 | 2025-05-14 13:47:27.968286 | PLAY [Base pre] 2025-05-14 13:47:27.986269 | 2025-05-14 13:47:27.986408 | TASK [Setup log path fact] 2025-05-14 13:47:28.016682 | orchestrator | ok 2025-05-14 13:47:28.034760 | 2025-05-14 13:47:28.034928 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-14 13:47:28.083288 | orchestrator | ok 2025-05-14 13:47:28.099135 | 2025-05-14 13:47:28.099275 | TASK [emit-job-header : Print job information] 2025-05-14 13:47:28.154038 | # Job Information 2025-05-14 13:47:28.154316 | Ansible Version: 2.16.14 2025-05-14 13:47:28.154372 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04 2025-05-14 13:47:28.154412 | Pipeline: post 2025-05-14 13:47:28.154440 | Executor: 521e9411259a 2025-05-14 13:47:28.154464 | Triggered by: https://github.com/osism/testbed/commit/af52aa70f3094e0a0d3d5fd7c99d6db62851a2e8 2025-05-14 13:47:28.154500 | Event ID: f0e2c994-30c9-11f0-814d-9696b28d4546 2025-05-14 13:47:28.162957 | 2025-05-14 13:47:28.163131 | LOOP [emit-job-header : Print node information] 2025-05-14 13:47:28.281286 | orchestrator | ok: 2025-05-14 13:47:28.281498 | orchestrator | # Node Information 2025-05-14 13:47:28.281532 | orchestrator | Inventory Hostname: orchestrator 2025-05-14 13:47:28.281558 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-05-14 13:47:28.281581 | orchestrator | Username: zuul-testbed06 2025-05-14 13:47:28.281602 | orchestrator | Distro: Debian 12.10 2025-05-14 13:47:28.281628 | orchestrator | Provider: static-testbed 2025-05-14 13:47:28.281649 | orchestrator | Region: 2025-05-14 13:47:28.281688 | orchestrator | Label: testbed-orchestrator 2025-05-14 13:47:28.281709 | orchestrator | Product Name: OpenStack Nova 2025-05-14 13:47:28.281728 | orchestrator | Interface IP: 81.163.193.140 2025-05-14 13:47:28.311820 | 2025-05-14 13:47:28.312027 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-05-14 13:47:28.814317 | orchestrator -> localhost | changed 2025-05-14 13:47:28.836752 | 2025-05-14 13:47:28.837055 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-05-14 13:47:29.920364 | orchestrator -> localhost | changed 2025-05-14 13:47:29.935242 | 2025-05-14 13:47:29.935381 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-05-14 13:47:30.242998 | orchestrator -> localhost | ok 2025-05-14 13:47:30.250460 | 2025-05-14 13:47:30.250599 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-05-14 13:47:30.280265 | orchestrator | ok 2025-05-14 13:47:30.297148 | orchestrator | included: /var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-05-14 13:47:30.305702 | 2025-05-14 13:47:30.305816 | TASK [add-build-sshkey : Create Temp SSH key] 2025-05-14 13:47:31.739959 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-05-14 13:47:31.740446 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/94308d9cc51747de973250c0c0b71a8a_id_rsa 2025-05-14 13:47:31.740546 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/94308d9cc51747de973250c0c0b71a8a_id_rsa.pub 2025-05-14 13:47:31.740614 | orchestrator -> localhost | The key fingerprint is: 2025-05-14 13:47:31.740697 | orchestrator -> localhost | SHA256:5hitSi6706phJNR/GoS7laWjkrXLtKm0c4k/lX1Qvc0 zuul-build-sshkey 2025-05-14 13:47:31.740757 | orchestrator -> localhost | The key's randomart image is: 2025-05-14 13:47:31.740838 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-05-14 13:47:31.740896 | orchestrator -> localhost | | . | 2025-05-14 13:47:31.740950 | orchestrator -> localhost | | . . . . | 2025-05-14 13:47:31.741001 | orchestrator -> localhost | | . o . .. + | 2025-05-14 13:47:31.741051 | orchestrator -> localhost | |. + +o . E | 2025-05-14 13:47:31.741100 | orchestrator -> localhost | |.. o B+.S | 2025-05-14 13:47:31.741172 | orchestrator -> localhost | |o o =o=B . | 2025-05-14 13:47:31.741228 | orchestrator -> localhost | |.=.*+.o o | 2025-05-14 13:47:31.741280 | orchestrator -> localhost | |o+B*=. | 2025-05-14 13:47:31.741332 | orchestrator -> localhost | |.+O%+ | 2025-05-14 13:47:31.741383 | orchestrator -> localhost | +----[SHA256]-----+ 2025-05-14 13:47:31.741506 | orchestrator -> localhost | ok: Runtime: 0:00:00.892144 2025-05-14 13:47:31.757861 | 2025-05-14 13:47:31.758039 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-05-14 13:47:31.794529 | orchestrator | ok 2025-05-14 13:47:31.808626 | orchestrator | included: /var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-05-14 13:47:31.818462 | 2025-05-14 13:47:31.818593 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-05-14 13:47:31.842973 | orchestrator | skipping: Conditional result was False 2025-05-14 13:47:31.851454 | 2025-05-14 13:47:31.851591 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-05-14 13:47:32.450990 | orchestrator | changed 2025-05-14 13:47:32.460973 | 2025-05-14 13:47:32.461131 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-05-14 13:47:32.729356 | orchestrator | ok 2025-05-14 13:47:32.736738 | 2025-05-14 13:47:32.736875 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-05-14 13:47:33.135489 | orchestrator | ok 2025-05-14 13:47:33.142545 | 2025-05-14 13:47:33.142700 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-05-14 13:47:33.555048 | orchestrator | ok 2025-05-14 13:47:33.563028 | 2025-05-14 13:47:33.563165 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-05-14 13:47:33.598121 | orchestrator | skipping: Conditional result was False 2025-05-14 13:47:33.612794 | 2025-05-14 13:47:33.612965 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-05-14 13:47:34.086434 | orchestrator -> localhost | changed 2025-05-14 13:47:34.106937 | 2025-05-14 13:47:34.107184 | TASK [add-build-sshkey : Add back temp key] 2025-05-14 13:47:34.466454 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/94308d9cc51747de973250c0c0b71a8a_id_rsa (zuul-build-sshkey) 2025-05-14 13:47:34.466757 | orchestrator -> localhost | ok: 
Runtime: 0:00:00.018301 2025-05-14 13:47:34.474403 | 2025-05-14 13:47:34.474522 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-05-14 13:47:34.917284 | orchestrator | ok 2025-05-14 13:47:34.926878 | 2025-05-14 13:47:34.927007 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-05-14 13:47:34.963791 | orchestrator | skipping: Conditional result was False 2025-05-14 13:47:35.028899 | 2025-05-14 13:47:35.029059 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-05-14 13:47:35.567128 | orchestrator | ok 2025-05-14 13:47:35.580957 | 2025-05-14 13:47:35.581101 | TASK [validate-host : Define zuul_info_dir fact] 2025-05-14 13:47:35.622717 | orchestrator | ok 2025-05-14 13:47:35.632114 | 2025-05-14 13:47:35.632240 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-05-14 13:47:35.940097 | orchestrator -> localhost | ok 2025-05-14 13:47:35.947973 | 2025-05-14 13:47:35.948090 | TASK [validate-host : Collect information about the host] 2025-05-14 13:47:37.158327 | orchestrator | ok 2025-05-14 13:47:37.173112 | 2025-05-14 13:47:37.173249 | TASK [validate-host : Sanitize hostname] 2025-05-14 13:47:37.238823 | orchestrator | ok 2025-05-14 13:47:37.247983 | 2025-05-14 13:47:37.248141 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-05-14 13:47:37.829414 | orchestrator -> localhost | changed 2025-05-14 13:47:37.837077 | 2025-05-14 13:47:37.837205 | TASK [validate-host : Collect information about zuul worker] 2025-05-14 13:47:38.283877 | orchestrator | ok 2025-05-14 13:47:38.289795 | 2025-05-14 13:47:38.289921 | TASK [validate-host : Write out all zuul information for each host] 2025-05-14 13:47:38.874464 | orchestrator -> localhost | changed 2025-05-14 13:47:38.895339 | 2025-05-14 13:47:38.895549 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-05-14 13:47:39.191573 | orchestrator | ok 2025-05-14 13:47:39.209909 | 2025-05-14 13:47:39.210124 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-05-14 13:48:14.969544 | orchestrator | changed: 2025-05-14 13:48:14.969837 | orchestrator | .d..t...... src/ 2025-05-14 13:48:14.969882 | orchestrator | .d..t...... src/github.com/ 2025-05-14 13:48:14.969908 | orchestrator | .d..t...... src/github.com/osism/ 2025-05-14 13:48:14.969930 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-05-14 13:48:14.969951 | orchestrator | RedHat.yml 2025-05-14 13:48:14.982236 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-05-14 13:48:14.982255 | orchestrator | RedHat.yml 2025-05-14 13:48:14.982311 | orchestrator | = 2.2.0"... 2025-05-14 13:48:27.925149 | orchestrator | 13:48:27.924 STDOUT terraform: - Finding latest version of hashicorp/null... 2025-05-14 13:48:28.003884 | orchestrator | 13:48:28.003 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"... 2025-05-14 13:48:29.033398 | orchestrator | 13:48:29.033 STDOUT terraform: - Installing hashicorp/local v2.5.2... 2025-05-14 13:48:33.252944 | orchestrator | 13:48:33.252 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80) 2025-05-14 13:48:34.154800 | orchestrator | 13:48:34.154 STDOUT terraform: - Installing hashicorp/null v3.2.4... 
2025-05-14 13:48:35.058863 | orchestrator | 13:48:35.058 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-05-14 13:48:36.024670 | orchestrator | 13:48:36.024 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0... 2025-05-14 13:48:37.057846 | orchestrator | 13:48:37.057 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2) 2025-05-14 13:48:37.058109 | orchestrator | 13:48:37.057 STDOUT terraform: Providers are signed by their developers. 2025-05-14 13:48:37.058122 | orchestrator | 13:48:37.058 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-05-14 13:48:37.058128 | orchestrator | 13:48:37.058 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-05-14 13:48:37.058395 | orchestrator | 13:48:37.058 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-05-14 13:48:37.058409 | orchestrator | 13:48:37.058 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-05-14 13:48:37.058417 | orchestrator | 13:48:37.058 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-05-14 13:48:37.058421 | orchestrator | 13:48:37.058 STDOUT terraform: you run "tofu init" in the future. 2025-05-14 13:48:37.058980 | orchestrator | 13:48:37.058 STDOUT terraform: OpenTofu has been successfully initialized! 2025-05-14 13:48:37.059325 | orchestrator | 13:48:37.059 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-05-14 13:48:37.059335 | orchestrator | 13:48:37.059 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-05-14 13:48:37.059340 | orchestrator | 13:48:37.059 STDOUT terraform: should now work. 2025-05-14 13:48:37.059344 | orchestrator | 13:48:37.059 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-05-14 13:48:37.059348 | orchestrator | 13:48:37.059 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-05-14 13:48:37.059353 | orchestrator | 13:48:37.059 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-05-14 13:48:37.234982 | orchestrator | 13:48:37.234 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-05-14 13:48:37.777890 | orchestrator | 13:48:37.777 STDOUT terraform: Created and switched to workspace "ci"! 2025-05-14 13:48:37.778046 | orchestrator | 13:48:37.777 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-05-14 13:48:37.778062 | orchestrator | 13:48:37.777 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-05-14 13:48:37.778071 | orchestrator | 13:48:37.777 STDOUT terraform: for this configuration. 2025-05-14 13:48:38.017287 | orchestrator | 13:48:38.017 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 
2025-05-14 13:48:38.108763 | orchestrator | 13:48:38.108 STDOUT terraform: ci.auto.tfvars 2025-05-14 13:48:38.451834 | orchestrator | 13:48:38.451 STDOUT terraform: default_custom.tf 2025-05-14 13:48:38.651336 | orchestrator | 13:48:38.651 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-05-14 13:48:39.578300 | orchestrator | 13:48:39.578 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-05-14 13:48:40.382842 | orchestrator | 13:48:40.382 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-05-14 13:48:40.560654 | orchestrator | 13:48:40.560 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-05-14 13:48:40.560740 | orchestrator | 13:48:40.560 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-05-14 13:48:40.560752 | orchestrator | 13:48:40.560 STDOUT terraform:  + create 2025-05-14 13:48:40.560819 | orchestrator | 13:48:40.560 STDOUT terraform:  <= read (data resources) 2025-05-14 13:48:40.560903 | orchestrator | 13:48:40.560 STDOUT terraform: OpenTofu will perform the following actions: 2025-05-14 13:48:40.561056 | orchestrator | 13:48:40.560 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-05-14 13:48:40.561143 | orchestrator | 13:48:40.561 STDOUT terraform:  # (config refers to values not yet known) 2025-05-14 13:48:40.561215 | orchestrator | 13:48:40.561 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-05-14 13:48:40.561295 | orchestrator | 13:48:40.561 STDOUT terraform:  + checksum = (known after apply) 2025-05-14 13:48:40.561374 | orchestrator | 13:48:40.561 STDOUT terraform:  + created_at = (known after apply) 2025-05-14 13:48:40.561526 | orchestrator | 13:48:40.561 STDOUT terraform:  + file = (known after apply) 2025-05-14 13:48:40.561598 | orchestrator | 13:48:40.561 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.561680 | orchestrator | 13:48:40.561 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.561758 | orchestrator | 13:48:40.561 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-14 13:48:40.561836 | orchestrator | 13:48:40.561 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-14 13:48:40.561889 | orchestrator | 13:48:40.561 STDOUT terraform:  + most_recent = true 2025-05-14 13:48:40.561968 | orchestrator | 13:48:40.561 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.562130 | orchestrator | 13:48:40.561 STDOUT terraform:  + protected = (known after apply) 2025-05-14 13:48:40.562251 | orchestrator | 13:48:40.562 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.562330 | orchestrator | 13:48:40.562 STDOUT terraform:  + schema = (known after apply) 2025-05-14 13:48:40.562446 | orchestrator | 13:48:40.562 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-14 13:48:40.562555 | orchestrator | 13:48:40.562 STDOUT terraform:  + tags = (known after apply) 2025-05-14 13:48:40.562635 | orchestrator | 13:48:40.562 STDOUT terraform:  + updated_at = (known after apply) 2025-05-14 13:48:40.562672 | orchestrator | 13:48:40.562 STDOUT terraform:  } 2025-05-14 13:48:40.562805 | orchestrator | 13:48:40.562 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 
2025-05-14 13:48:40.562887 | orchestrator | 13:48:40.562 STDOUT terraform:  # (config refers to values not yet known) 2025-05-14 13:48:40.563006 | orchestrator | 13:48:40.562 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-14 13:48:40.563083 | orchestrator | 13:48:40.563 STDOUT terraform:  + checksum = (known after apply) 2025-05-14 13:48:40.563162 | orchestrator | 13:48:40.563 STDOUT terraform:  + created_at = (known after apply) 2025-05-14 13:48:40.563264 | orchestrator | 13:48:40.563 STDOUT terraform:  + file = (known after apply) 2025-05-14 13:48:40.563345 | orchestrator | 13:48:40.563 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.563460 | orchestrator | 13:48:40.563 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.563528 | orchestrator | 13:48:40.563 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-14 13:48:40.563609 | orchestrator | 13:48:40.563 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-14 13:48:40.563661 | orchestrator | 13:48:40.563 STDOUT terraform:  + most_recent = true 2025-05-14 13:48:40.563739 | orchestrator | 13:48:40.563 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.563839 | orchestrator | 13:48:40.563 STDOUT terraform:  + protected = (known after apply) 2025-05-14 13:48:40.563918 | orchestrator | 13:48:40.563 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.563997 | orchestrator | 13:48:40.563 STDOUT terraform:  + schema = (known after apply) 2025-05-14 13:48:40.564084 | orchestrator | 13:48:40.563 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-14 13:48:40.564172 | orchestrator | 13:48:40.564 STDOUT terraform:  + tags = (known after apply) 2025-05-14 13:48:40.564251 | orchestrator | 13:48:40.564 STDOUT terraform:  + updated_at = (known after apply) 2025-05-14 13:48:40.564289 | orchestrator | 13:48:40.564 STDOUT terraform:  } 2025-05-14 13:48:40.564372 | orchestrator | 13:48:40.564 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-14 13:48:40.564488 | orchestrator | 13:48:40.564 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-14 13:48:40.564587 | orchestrator | 13:48:40.564 STDOUT terraform:  + content = (known after apply) 2025-05-14 13:48:40.564687 | orchestrator | 13:48:40.564 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 13:48:40.564835 | orchestrator | 13:48:40.564 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 13:48:40.564935 | orchestrator | 13:48:40.564 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 13:48:40.565033 | orchestrator | 13:48:40.564 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 13:48:40.565131 | orchestrator | 13:48:40.565 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 13:48:40.565229 | orchestrator | 13:48:40.565 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 13:48:40.565295 | orchestrator | 13:48:40.565 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 13:48:40.565361 | orchestrator | 13:48:40.565 STDOUT terraform:  + file_permission = "0644" 2025-05-14 13:48:40.565500 | orchestrator | 13:48:40.565 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-14 13:48:40.565589 | orchestrator | 13:48:40.565 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.565625 | orchestrator | 13:48:40.565 STDOUT terraform:  } 2025-05-14 13:48:40.565702 | orchestrator | 13:48:40.565 STDOUT 
terraform:  # local_file.id_rsa_pub will be created 2025-05-14 13:48:40.565771 | orchestrator | 13:48:40.565 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-14 13:48:40.565871 | orchestrator | 13:48:40.565 STDOUT terraform:  + content = (known after apply) 2025-05-14 13:48:40.565978 | orchestrator | 13:48:40.565 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 13:48:40.566106 | orchestrator | 13:48:40.565 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 13:48:40.566202 | orchestrator | 13:48:40.566 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 13:48:40.566300 | orchestrator | 13:48:40.566 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 13:48:40.566426 | orchestrator | 13:48:40.566 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 13:48:40.566539 | orchestrator | 13:48:40.566 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 13:48:40.566607 | orchestrator | 13:48:40.566 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 13:48:40.566673 | orchestrator | 13:48:40.566 STDOUT terraform:  + file_permission = "0644" 2025-05-14 13:48:40.566817 | orchestrator | 13:48:40.566 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-14 13:48:40.566922 | orchestrator | 13:48:40.566 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.566959 | orchestrator | 13:48:40.566 STDOUT terraform:  } 2025-05-14 13:48:40.567025 | orchestrator | 13:48:40.566 STDOUT terraform:  # local_file.inventory will be created 2025-05-14 13:48:40.567094 | orchestrator | 13:48:40.567 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-14 13:48:40.567192 | orchestrator | 13:48:40.567 STDOUT terraform:  + content = (known after apply) 2025-05-14 13:48:40.567288 | orchestrator | 13:48:40.567 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 13:48:40.567483 | orchestrator | 13:48:40.567 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 13:48:40.567583 | orchestrator | 13:48:40.567 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 13:48:40.567712 | orchestrator | 13:48:40.567 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 13:48:40.567795 | orchestrator | 13:48:40.567 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 13:48:40.567891 | orchestrator | 13:48:40.567 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 13:48:40.567962 | orchestrator | 13:48:40.567 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 13:48:40.568022 | orchestrator | 13:48:40.567 STDOUT terraform:  + file_permission = "0644" 2025-05-14 13:48:40.568094 | orchestrator | 13:48:40.568 STDOUT terraform:  + filename = "inventory.ci" 2025-05-14 13:48:40.568182 | orchestrator | 13:48:40.568 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.568214 | orchestrator | 13:48:40.568 STDOUT terraform:  } 2025-05-14 13:48:40.568491 | orchestrator | 13:48:40.568 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-14 13:48:40.568568 | orchestrator | 13:48:40.568 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-14 13:48:40.568655 | orchestrator | 13:48:40.568 STDOUT terraform:  + content = (sensitive value) 2025-05-14 13:48:40.568802 | orchestrator | 13:48:40.568 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 13:48:40.568901 | orchestrator | 13:48:40.568 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-05-14 13:48:40.568984 | orchestrator | 13:48:40.568 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 13:48:40.569055 | orchestrator | 13:48:40.568 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 13:48:40.569135 | orchestrator | 13:48:40.569 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 13:48:40.569207 | orchestrator | 13:48:40.569 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 13:48:40.569254 | orchestrator | 13:48:40.569 STDOUT terraform:  + directory_permission = "0700" 2025-05-14 13:48:40.569301 | orchestrator | 13:48:40.569 STDOUT terraform:  + file_permission = "0600" 2025-05-14 13:48:40.569420 | orchestrator | 13:48:40.569 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-14 13:48:40.569495 | orchestrator | 13:48:40.569 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.569522 | orchestrator | 13:48:40.569 STDOUT terraform:  } 2025-05-14 13:48:40.569581 | orchestrator | 13:48:40.569 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-14 13:48:40.569638 | orchestrator | 13:48:40.569 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-14 13:48:40.569681 | orchestrator | 13:48:40.569 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.569706 | orchestrator | 13:48:40.569 STDOUT terraform:  } 2025-05-14 13:48:40.569804 | orchestrator | 13:48:40.569 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-14 13:48:40.569907 | orchestrator | 13:48:40.569 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-14 13:48:40.569974 | orchestrator | 13:48:40.569 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.570053 | orchestrator | 13:48:40.569 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.570118 | orchestrator | 13:48:40.570 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.570197 | orchestrator | 13:48:40.570 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.570250 | orchestrator | 13:48:40.570 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.570339 | orchestrator | 13:48:40.570 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-14 13:48:40.570428 | orchestrator | 13:48:40.570 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.570449 | orchestrator | 13:48:40.570 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.570501 | orchestrator | 13:48:40.570 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.570528 | orchestrator | 13:48:40.570 STDOUT terraform:  } 2025-05-14 13:48:40.570623 | orchestrator | 13:48:40.570 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-14 13:48:40.570712 | orchestrator | 13:48:40.570 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 13:48:40.570791 | orchestrator | 13:48:40.570 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.570833 | orchestrator | 13:48:40.570 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.570895 | orchestrator | 13:48:40.570 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.570963 | orchestrator | 13:48:40.570 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.571024 | orchestrator | 13:48:40.570 STDOUT terraform:  + metadata = (known 
after apply) 2025-05-14 13:48:40.571103 | orchestrator | 13:48:40.571 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-14 13:48:40.571165 | orchestrator | 13:48:40.571 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.571221 | orchestrator | 13:48:40.571 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.571272 | orchestrator | 13:48:40.571 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.571298 | orchestrator | 13:48:40.571 STDOUT terraform:  } 2025-05-14 13:48:40.571462 | orchestrator | 13:48:40.571 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-14 13:48:40.571555 | orchestrator | 13:48:40.571 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 13:48:40.571626 | orchestrator | 13:48:40.571 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.571668 | orchestrator | 13:48:40.571 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.571751 | orchestrator | 13:48:40.571 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.571791 | orchestrator | 13:48:40.571 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.571888 | orchestrator | 13:48:40.571 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.571957 | orchestrator | 13:48:40.571 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-14 13:48:40.572009 | orchestrator | 13:48:40.571 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.572042 | orchestrator | 13:48:40.572 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.572088 | orchestrator | 13:48:40.572 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.572129 | orchestrator | 13:48:40.572 STDOUT terraform:  } 2025-05-14 13:48:40.572211 | orchestrator | 13:48:40.572 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-14 13:48:40.572296 | orchestrator | 13:48:40.572 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 13:48:40.572350 | orchestrator | 13:48:40.572 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.572414 | orchestrator | 13:48:40.572 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.572467 | orchestrator | 13:48:40.572 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.572516 | orchestrator | 13:48:40.572 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.572572 | orchestrator | 13:48:40.572 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.572644 | orchestrator | 13:48:40.572 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-14 13:48:40.572697 | orchestrator | 13:48:40.572 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.572744 | orchestrator | 13:48:40.572 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.572787 | orchestrator | 13:48:40.572 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.572799 | orchestrator | 13:48:40.572 STDOUT terraform:  } 2025-05-14 13:48:40.572882 | orchestrator | 13:48:40.572 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-14 13:48:40.572973 | orchestrator | 13:48:40.572 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 13:48:40.573026 | orchestrator | 13:48:40.572 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.573060 | 
orchestrator | 13:48:40.573 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.573111 | orchestrator | 13:48:40.573 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.573172 | orchestrator | 13:48:40.573 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.573224 | orchestrator | 13:48:40.573 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.573297 | orchestrator | 13:48:40.573 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-14 13:48:40.573365 | orchestrator | 13:48:40.573 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.573438 | orchestrator | 13:48:40.573 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.573451 | orchestrator | 13:48:40.573 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.573463 | orchestrator | 13:48:40.573 STDOUT terraform:  } 2025-05-14 13:48:40.573541 | orchestrator | 13:48:40.573 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-14 13:48:40.573628 | orchestrator | 13:48:40.573 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 13:48:40.573689 | orchestrator | 13:48:40.573 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.573721 | orchestrator | 13:48:40.573 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.573784 | orchestrator | 13:48:40.573 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.573838 | orchestrator | 13:48:40.573 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.573908 | orchestrator | 13:48:40.573 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.573976 | orchestrator | 13:48:40.573 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-14 13:48:40.574055 | orchestrator | 13:48:40.573 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.574089 | orchestrator | 13:48:40.574 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.574124 | orchestrator | 13:48:40.574 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.574146 | orchestrator | 13:48:40.574 STDOUT terraform:  } 2025-05-14 13:48:40.574223 | orchestrator | 13:48:40.574 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-14 13:48:40.574301 | orchestrator | 13:48:40.574 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 13:48:40.574352 | orchestrator | 13:48:40.574 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.574444 | orchestrator | 13:48:40.574 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.574501 | orchestrator | 13:48:40.574 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.574578 | orchestrator | 13:48:40.574 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.574628 | orchestrator | 13:48:40.574 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.574690 | orchestrator | 13:48:40.574 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-14 13:48:40.574739 | orchestrator | 13:48:40.574 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.574778 | orchestrator | 13:48:40.574 STDOUT terraform:  + size = 80 2025-05-14 13:48:40.574818 | orchestrator | 13:48:40.574 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.574839 | orchestrator | 13:48:40.574 STDOUT terraform:  } 2025-05-14 13:48:40.574920 | orchestrator | 
13:48:40.574 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-14 13:48:40.574989 | orchestrator | 13:48:40.574 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.575059 | orchestrator | 13:48:40.574 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.575139 | orchestrator | 13:48:40.575 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.575190 | orchestrator | 13:48:40.575 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.575246 | orchestrator | 13:48:40.575 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.575305 | orchestrator | 13:48:40.575 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-14 13:48:40.575353 | orchestrator | 13:48:40.575 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.575433 | orchestrator | 13:48:40.575 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.575464 | orchestrator | 13:48:40.575 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.575475 | orchestrator | 13:48:40.575 STDOUT terraform:  } 2025-05-14 13:48:40.575551 | orchestrator | 13:48:40.575 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-14 13:48:40.575628 | orchestrator | 13:48:40.575 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.575677 | orchestrator | 13:48:40.575 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.575731 | orchestrator | 13:48:40.575 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.575782 | orchestrator | 13:48:40.575 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.575830 | orchestrator | 13:48:40.575 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.575889 | orchestrator | 13:48:40.575 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-14 13:48:40.575950 | orchestrator | 13:48:40.575 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.575981 | orchestrator | 13:48:40.575 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.576027 | orchestrator | 13:48:40.575 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.576060 | orchestrator | 13:48:40.576 STDOUT terraform:  } 2025-05-14 13:48:40.576131 | orchestrator | 13:48:40.576 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-14 13:48:40.576205 | orchestrator | 13:48:40.576 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.576268 | orchestrator | 13:48:40.576 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.576302 | orchestrator | 13:48:40.576 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.576351 | orchestrator | 13:48:40.576 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.576427 | orchestrator | 13:48:40.576 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.576485 | orchestrator | 13:48:40.576 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-14 13:48:40.576534 | orchestrator | 13:48:40.576 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.576565 | orchestrator | 13:48:40.576 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.576597 | orchestrator | 13:48:40.576 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.576608 | orchestrator | 13:48:40.576 STDOUT terraform:  } 2025-05-14 13:48:40.576689 | 
orchestrator | 13:48:40.576 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-14 13:48:40.576757 | orchestrator | 13:48:40.576 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.576825 | orchestrator | 13:48:40.576 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.576864 | orchestrator | 13:48:40.576 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.576914 | orchestrator | 13:48:40.576 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.576962 | orchestrator | 13:48:40.576 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.577028 | orchestrator | 13:48:40.576 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-14 13:48:40.577079 | orchestrator | 13:48:40.577 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.577110 | orchestrator | 13:48:40.577 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.577141 | orchestrator | 13:48:40.577 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.577161 | orchestrator | 13:48:40.577 STDOUT terraform:  } 2025-05-14 13:48:40.577239 | orchestrator | 13:48:40.577 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-14 13:48:40.577321 | orchestrator | 13:48:40.577 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.577370 | orchestrator | 13:48:40.577 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.577419 | orchestrator | 13:48:40.577 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.577481 | orchestrator | 13:48:40.577 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.577542 | orchestrator | 13:48:40.577 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.577609 | orchestrator | 13:48:40.577 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-14 13:48:40.577666 | orchestrator | 13:48:40.577 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.577702 | orchestrator | 13:48:40.577 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.577729 | orchestrator | 13:48:40.577 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.577749 | orchestrator | 13:48:40.577 STDOUT terraform:  } 2025-05-14 13:48:40.577819 | orchestrator | 13:48:40.577 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-14 13:48:40.577900 | orchestrator | 13:48:40.577 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.577957 | orchestrator | 13:48:40.577 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.578006 | orchestrator | 13:48:40.577 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.578083 | orchestrator | 13:48:40.578 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.578131 | orchestrator | 13:48:40.578 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.578191 | orchestrator | 13:48:40.578 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-14 13:48:40.578238 | orchestrator | 13:48:40.578 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.578269 | orchestrator | 13:48:40.578 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.578302 | orchestrator | 13:48:40.578 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.578322 | orchestrator | 13:48:40.578 STDOUT terraform:  } 2025-05-14 
13:48:40.578461 | orchestrator | 13:48:40.578 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-14 13:48:40.578540 | orchestrator | 13:48:40.578 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.578599 | orchestrator | 13:48:40.578 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.578628 | orchestrator | 13:48:40.578 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.578671 | orchestrator | 13:48:40.578 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.578714 | orchestrator | 13:48:40.578 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.578767 | orchestrator | 13:48:40.578 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-14 13:48:40.578850 | orchestrator | 13:48:40.578 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.578881 | orchestrator | 13:48:40.578 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.578911 | orchestrator | 13:48:40.578 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.578929 | orchestrator | 13:48:40.578 STDOUT terraform:  } 2025-05-14 13:48:40.579018 | orchestrator | 13:48:40.578 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-14 13:48:40.579085 | orchestrator | 13:48:40.579 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.579128 | orchestrator | 13:48:40.579 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.579155 | orchestrator | 13:48:40.579 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.579206 | orchestrator | 13:48:40.579 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.579248 | orchestrator | 13:48:40.579 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.579299 | orchestrator | 13:48:40.579 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-14 13:48:40.579341 | orchestrator | 13:48:40.579 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.579370 | orchestrator | 13:48:40.579 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.579422 | orchestrator | 13:48:40.579 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.579446 | orchestrator | 13:48:40.579 STDOUT terraform:  } 2025-05-14 13:48:40.579509 | orchestrator | 13:48:40.579 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-14 13:48:40.579574 | orchestrator | 13:48:40.579 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 13:48:40.579630 | orchestrator | 13:48:40.579 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 13:48:40.579658 | orchestrator | 13:48:40.579 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.579701 | orchestrator | 13:48:40.579 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.579752 | orchestrator | 13:48:40.579 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 13:48:40.579802 | orchestrator | 13:48:40.579 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-14 13:48:40.579845 | orchestrator | 13:48:40.579 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.579872 | orchestrator | 13:48:40.579 STDOUT terraform:  + size = 20 2025-05-14 13:48:40.579906 | orchestrator | 13:48:40.579 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 13:48:40.579938 | orchestrator | 13:48:40.579 STDOUT 
terraform:  } 2025-05-14 13:48:40.579999 | orchestrator | 13:48:40.579 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-14 13:48:40.580058 | orchestrator | 13:48:40.579 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-14 13:48:40.580113 | orchestrator | 13:48:40.580 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.580161 | orchestrator | 13:48:40.580 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 13:48:40.580210 | orchestrator | 13:48:40.580 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.580258 | orchestrator | 13:48:40.580 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.580310 | orchestrator | 13:48:40.580 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.580339 | orchestrator | 13:48:40.580 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.580410 | orchestrator | 13:48:40.580 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.580473 | orchestrator | 13:48:40.580 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.580522 | orchestrator | 13:48:40.580 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-14 13:48:40.580555 | orchestrator | 13:48:40.580 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.580604 | orchestrator | 13:48:40.580 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.580659 | orchestrator | 13:48:40.580 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.580708 | orchestrator | 13:48:40.580 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.580754 | orchestrator | 13:48:40.580 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.580798 | orchestrator | 13:48:40.580 STDOUT terraform:  + name = "testbed-manager" 2025-05-14 13:48:40.580837 | orchestrator | 13:48:40.580 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.580974 | orchestrator | 13:48:40.580 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.581034 | orchestrator | 13:48:40.580 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.581079 | orchestrator | 13:48:40.581 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.581136 | orchestrator | 13:48:40.581 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.581185 | orchestrator | 13:48:40.581 STDOUT terraform:  + user_data = (known after apply) 2025-05-14 13:48:40.581207 | orchestrator | 13:48:40.581 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.581241 | orchestrator | 13:48:40.581 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.581280 | orchestrator | 13:48:40.581 STDOUT terraform:  + delete_on_termination = false 2025-05-14 13:48:40.581334 | orchestrator | 13:48:40.581 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.581375 | orchestrator | 13:48:40.581 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.581431 | orchestrator | 13:48:40.581 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.581492 | orchestrator | 13:48:40.581 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.581521 | orchestrator | 13:48:40.581 STDOUT terraform:  } 2025-05-14 13:48:40.581547 | orchestrator | 13:48:40.581 STDOUT terraform:  + network { 2025-05-14 13:48:40.581567 | orchestrator | 13:48:40.581 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.581612 | orchestrator | 
13:48:40.581 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.581661 | orchestrator | 13:48:40.581 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.581699 | orchestrator | 13:48:40.581 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.581737 | orchestrator | 13:48:40.581 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.581775 | orchestrator | 13:48:40.581 STDOUT terraform:  + port = (known after apply) 2025-05-14 13:48:40.581826 | orchestrator | 13:48:40.581 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.581845 | orchestrator | 13:48:40.581 STDOUT terraform:  } 2025-05-14 13:48:40.581862 | orchestrator | 13:48:40.581 STDOUT terraform:  } 2025-05-14 13:48:40.581930 | orchestrator | 13:48:40.581 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-14 13:48:40.581981 | orchestrator | 13:48:40.581 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 13:48:40.582054 | orchestrator | 13:48:40.581 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.582097 | orchestrator | 13:48:40.582 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 13:48:40.582140 | orchestrator | 13:48:40.582 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.582190 | orchestrator | 13:48:40.582 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.582219 | orchestrator | 13:48:40.582 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.582244 | orchestrator | 13:48:40.582 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.582287 | orchestrator | 13:48:40.582 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.582331 | orchestrator | 13:48:40.582 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.582400 | orchestrator | 13:48:40.582 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 13:48:40.582443 | orchestrator | 13:48:40.582 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.582491 | orchestrator | 13:48:40.582 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.582542 | orchestrator | 13:48:40.582 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.582588 | orchestrator | 13:48:40.582 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.582624 | orchestrator | 13:48:40.582 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.582663 | orchestrator | 13:48:40.582 STDOUT terraform:  + name = "testbed-node-0" 2025-05-14 13:48:40.582700 | orchestrator | 13:48:40.582 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.582744 | orchestrator | 13:48:40.582 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.582793 | orchestrator | 13:48:40.582 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.582822 | orchestrator | 13:48:40.582 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.582888 | orchestrator | 13:48:40.582 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.582966 | orchestrator | 13:48:40.582 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 13:48:40.582994 | orchestrator | 13:48:40.582 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.583024 | orchestrator | 13:48:40.582 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.583058 | orchestrator | 13:48:40.583 STDOUT 
terraform:  + delete_on_termination = false 2025-05-14 13:48:40.583101 | orchestrator | 13:48:40.583 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.583137 | orchestrator | 13:48:40.583 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.583173 | orchestrator | 13:48:40.583 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.583222 | orchestrator | 13:48:40.583 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.583249 | orchestrator | 13:48:40.583 STDOUT terraform:  } 2025-05-14 13:48:40.583278 | orchestrator | 13:48:40.583 STDOUT terraform:  + network { 2025-05-14 13:48:40.583304 | orchestrator | 13:48:40.583 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.583355 | orchestrator | 13:48:40.583 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.583427 | orchestrator | 13:48:40.583 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.583446 | orchestrator | 13:48:40.583 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.583490 | orchestrator | 13:48:40.583 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.583524 | orchestrator | 13:48:40.583 STDOUT terraform:  + port = (known after apply) 2025-05-14 13:48:40.583568 | orchestrator | 13:48:40.583 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.583586 | orchestrator | 13:48:40.583 STDOUT terraform:  } 2025-05-14 13:48:40.583601 | orchestrator | 13:48:40.583 STDOUT terraform:  } 2025-05-14 13:48:40.583656 | orchestrator | 13:48:40.583 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-14 13:48:40.584098 | orchestrator | 13:48:40.583 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 13:48:40.584168 | orchestrator | 13:48:40.584 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.584224 | orchestrator | 13:48:40.584 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 13:48:40.584271 | orchestrator | 13:48:40.584 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.584322 | orchestrator | 13:48:40.584 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.584353 | orchestrator | 13:48:40.584 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.584412 | orchestrator | 13:48:40.584 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.584507 | orchestrator | 13:48:40.584 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.584575 | orchestrator | 13:48:40.584 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.584630 | orchestrator | 13:48:40.584 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 13:48:40.584691 | orchestrator | 13:48:40.584 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.584737 | orchestrator | 13:48:40.584 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.584811 | orchestrator | 13:48:40.584 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.584871 | orchestrator | 13:48:40.584 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.584985 | orchestrator | 13:48:40.584 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.585068 | orchestrator | 13:48:40.584 STDOUT terraform:  + name = "testbed-node-1" 2025-05-14 13:48:40.585116 | orchestrator | 13:48:40.585 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.585161 | orchestrator | 13:48:40.585 
STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.585206 | orchestrator | 13:48:40.585 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.585235 | orchestrator | 13:48:40.585 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.585278 | orchestrator | 13:48:40.585 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.585332 | orchestrator | 13:48:40.585 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 13:48:40.585356 | orchestrator | 13:48:40.585 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.585419 | orchestrator | 13:48:40.585 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.585453 | orchestrator | 13:48:40.585 STDOUT terraform:  + delete_on_termination = false 2025-05-14 13:48:40.585539 | orchestrator | 13:48:40.585 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.585571 | orchestrator | 13:48:40.585 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.585621 | orchestrator | 13:48:40.585 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.585663 | orchestrator | 13:48:40.585 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.585684 | orchestrator | 13:48:40.585 STDOUT terraform:  } 2025-05-14 13:48:40.585699 | orchestrator | 13:48:40.585 STDOUT terraform:  + network { 2025-05-14 13:48:40.585726 | orchestrator | 13:48:40.585 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.585759 | orchestrator | 13:48:40.585 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.585796 | orchestrator | 13:48:40.585 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.585830 | orchestrator | 13:48:40.585 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.585881 | orchestrator | 13:48:40.585 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.585941 | orchestrator | 13:48:40.585 STDOUT terraform:  + port = (known after apply) 2025-05-14 13:48:40.585992 | orchestrator | 13:48:40.585 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.586032 | orchestrator | 13:48:40.585 STDOUT terraform:  } 2025-05-14 13:48:40.586053 | orchestrator | 13:48:40.586 STDOUT terraform:  } 2025-05-14 13:48:40.586117 | orchestrator | 13:48:40.586 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-14 13:48:40.586192 | orchestrator | 13:48:40.586 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 13:48:40.586228 | orchestrator | 13:48:40.586 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.586295 | orchestrator | 13:48:40.586 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 13:48:40.586344 | orchestrator | 13:48:40.586 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.586422 | orchestrator | 13:48:40.586 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.586476 | orchestrator | 13:48:40.586 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.586495 | orchestrator | 13:48:40.586 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.586537 | orchestrator | 13:48:40.586 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.586588 | orchestrator | 13:48:40.586 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.586627 | orchestrator | 13:48:40.586 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 
13:48:40.586650 | orchestrator | 13:48:40.586 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.586692 | orchestrator | 13:48:40.586 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.586729 | orchestrator | 13:48:40.586 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.586769 | orchestrator | 13:48:40.586 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.586809 | orchestrator | 13:48:40.586 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.586860 | orchestrator | 13:48:40.586 STDOUT terraform:  + name = "testbed-node-2" 2025-05-14 13:48:40.586913 | orchestrator | 13:48:40.586 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.586955 | orchestrator | 13:48:40.586 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.587007 | orchestrator | 13:48:40.586 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.587041 | orchestrator | 13:48:40.587 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.587080 | orchestrator | 13:48:40.587 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.587136 | orchestrator | 13:48:40.587 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 13:48:40.587155 | orchestrator | 13:48:40.587 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.587201 | orchestrator | 13:48:40.587 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.587231 | orchestrator | 13:48:40.587 STDOUT terraform:  + delete_on_termination = false 2025-05-14 13:48:40.587282 | orchestrator | 13:48:40.587 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.587343 | orchestrator | 13:48:40.587 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.587435 | orchestrator | 13:48:40.587 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.587520 | orchestrator | 13:48:40.587 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.587557 | orchestrator | 13:48:40.587 STDOUT terraform:  } 2025-05-14 13:48:40.587601 | orchestrator | 13:48:40.587 STDOUT terraform:  + network { 2025-05-14 13:48:40.587624 | orchestrator | 13:48:40.587 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.587662 | orchestrator | 13:48:40.587 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.587695 | orchestrator | 13:48:40.587 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.587731 | orchestrator | 13:48:40.587 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.587779 | orchestrator | 13:48:40.587 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.587817 | orchestrator | 13:48:40.587 STDOUT terraform:  + port = (known after apply) 2025-05-14 13:48:40.587879 | orchestrator | 13:48:40.587 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.587900 | orchestrator | 13:48:40.587 STDOUT terraform:  } 2025-05-14 13:48:40.587916 | orchestrator | 13:48:40.587 STDOUT terraform:  } 2025-05-14 13:48:40.587965 | orchestrator | 13:48:40.587 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-14 13:48:40.588061 | orchestrator | 13:48:40.587 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 13:48:40.588107 | orchestrator | 13:48:40.588 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.588144 | orchestrator | 13:48:40.588 STDOUT terraform:  + access_ip_v6 = (known after apply) 
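The node_server blocks being planned here are boot-from-volume Nova instances: all use the OSISM-8V-32 flavor, the "testbed" key pair, availability zone "nova", a config drive, and a volume-backed block_device with boot_index 0 that is kept on termination. A minimal HCL sketch that would produce plan entries of this shape (the count, the volume resource name and the user_data wiring are assumptions for illustration, not read from the actual osism/testbed configuration):

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                                   # assumed: testbed-node-0 .. testbed-node-5
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"                           # could also reference openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")               # assumed source of the user_data hash shown in the plan

  # boot from a pre-created Cinder volume; the volume survives instance deletion
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id  # assumed volume resource
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # attach via a pre-created management port rather than a network name
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

Because source_type and destination_type are both "volume" and delete_on_termination is false, the root disks outlive the instances unless they are deleted separately.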
2025-05-14 13:48:40.588200 | orchestrator | 13:48:40.588 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.588239 | orchestrator | 13:48:40.588 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.588284 | orchestrator | 13:48:40.588 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.588334 | orchestrator | 13:48:40.588 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.588375 | orchestrator | 13:48:40.588 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.588504 | orchestrator | 13:48:40.588 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.588509 | orchestrator | 13:48:40.588 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 13:48:40.588515 | orchestrator | 13:48:40.588 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.588559 | orchestrator | 13:48:40.588 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.588616 | orchestrator | 13:48:40.588 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.588653 | orchestrator | 13:48:40.588 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.588711 | orchestrator | 13:48:40.588 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.588796 | orchestrator | 13:48:40.588 STDOUT terraform:  + name = "testbed-node-3" 2025-05-14 13:48:40.588827 | orchestrator | 13:48:40.588 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.590564 | orchestrator | 13:48:40.588 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.590616 | orchestrator | 13:48:40.588 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.590621 | orchestrator | 13:48:40.589 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.590627 | orchestrator | 13:48:40.589 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.590632 | orchestrator | 13:48:40.589 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 13:48:40.590653 | orchestrator | 13:48:40.589 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.590657 | orchestrator | 13:48:40.589 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.590661 | orchestrator | 13:48:40.589 STDOUT terraform:  + delete_on_termination = false 2025-05-14 13:48:40.590666 | orchestrator | 13:48:40.589 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.590670 | orchestrator | 13:48:40.589 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.590674 | orchestrator | 13:48:40.589 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.590678 | orchestrator | 13:48:40.589 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.590682 | orchestrator | 13:48:40.589 STDOUT terraform:  } 2025-05-14 13:48:40.590686 | orchestrator | 13:48:40.589 STDOUT terraform:  + network { 2025-05-14 13:48:40.590690 | orchestrator | 13:48:40.589 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.590694 | orchestrator | 13:48:40.589 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.590698 | orchestrator | 13:48:40.589 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.590702 | orchestrator | 13:48:40.589 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.590706 | orchestrator | 13:48:40.589 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.590710 | orchestrator | 13:48:40.589 STDOUT terraform:  + port = (known after apply) 
2025-05-14 13:48:40.590714 | orchestrator | 13:48:40.589 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.590718 | orchestrator | 13:48:40.589 STDOUT terraform:  } 2025-05-14 13:48:40.590722 | orchestrator | 13:48:40.589 STDOUT terraform:  } 2025-05-14 13:48:40.590726 | orchestrator | 13:48:40.589 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-14 13:48:40.590730 | orchestrator | 13:48:40.590 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 13:48:40.590734 | orchestrator | 13:48:40.590 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.590738 | orchestrator | 13:48:40.590 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 13:48:40.590753 | orchestrator | 13:48:40.590 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.590757 | orchestrator | 13:48:40.590 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.590761 | orchestrator | 13:48:40.590 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.590765 | orchestrator | 13:48:40.590 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.590769 | orchestrator | 13:48:40.590 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.590773 | orchestrator | 13:48:40.590 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.590777 | orchestrator | 13:48:40.590 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 13:48:40.590787 | orchestrator | 13:48:40.590 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.590791 | orchestrator | 13:48:40.590 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.590795 | orchestrator | 13:48:40.590 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.590799 | orchestrator | 13:48:40.590 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.590803 | orchestrator | 13:48:40.590 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.590806 | orchestrator | 13:48:40.590 STDOUT terraform:  + name = "testbed-node-4" 2025-05-14 13:48:40.590810 | orchestrator | 13:48:40.590 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.590816 | orchestrator | 13:48:40.590 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.591171 | orchestrator | 13:48:40.590 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.591204 | orchestrator | 13:48:40.591 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.591243 | orchestrator | 13:48:40.591 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.591296 | orchestrator | 13:48:40.591 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 13:48:40.591323 | orchestrator | 13:48:40.591 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.591348 | orchestrator | 13:48:40.591 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.591397 | orchestrator | 13:48:40.591 STDOUT terraform:  + delete_on_termination = false 2025-05-14 13:48:40.591427 | orchestrator | 13:48:40.591 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.591457 | orchestrator | 13:48:40.591 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.591489 | orchestrator | 13:48:40.591 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.591531 | orchestrator | 13:48:40.591 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.591549 | orchestrator | 
13:48:40.591 STDOUT terraform:  } 2025-05-14 13:48:40.591567 | orchestrator | 13:48:40.591 STDOUT terraform:  + network { 2025-05-14 13:48:40.591590 | orchestrator | 13:48:40.591 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.591622 | orchestrator | 13:48:40.591 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.591662 | orchestrator | 13:48:40.591 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.591693 | orchestrator | 13:48:40.591 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.591726 | orchestrator | 13:48:40.591 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.591759 | orchestrator | 13:48:40.591 STDOUT terraform:  + port = (known after apply) 2025-05-14 13:48:40.591793 | orchestrator | 13:48:40.591 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.591808 | orchestrator | 13:48:40.591 STDOUT terraform:  } 2025-05-14 13:48:40.591823 | orchestrator | 13:48:40.591 STDOUT terraform:  } 2025-05-14 13:48:40.591867 | orchestrator | 13:48:40.591 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-14 13:48:40.591909 | orchestrator | 13:48:40.591 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 13:48:40.591946 | orchestrator | 13:48:40.591 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 13:48:40.591980 | orchestrator | 13:48:40.591 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 13:48:40.592016 | orchestrator | 13:48:40.591 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 13:48:40.592054 | orchestrator | 13:48:40.592 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.592083 | orchestrator | 13:48:40.592 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 13:48:40.592125 | orchestrator | 13:48:40.592 STDOUT terraform:  + config_drive = true 2025-05-14 13:48:40.592174 | orchestrator | 13:48:40.592 STDOUT terraform:  + created = (known after apply) 2025-05-14 13:48:40.592210 | orchestrator | 13:48:40.592 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 13:48:40.592241 | orchestrator | 13:48:40.592 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 13:48:40.592266 | orchestrator | 13:48:40.592 STDOUT terraform:  + force_delete = false 2025-05-14 13:48:40.592303 | orchestrator | 13:48:40.592 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.592341 | orchestrator | 13:48:40.592 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 13:48:40.592394 | orchestrator | 13:48:40.592 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 13:48:40.592429 | orchestrator | 13:48:40.592 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 13:48:40.592458 | orchestrator | 13:48:40.592 STDOUT terraform:  + name = "testbed-node-5" 2025-05-14 13:48:40.592486 | orchestrator | 13:48:40.592 STDOUT terraform:  + power_state = "active" 2025-05-14 13:48:40.592541 | orchestrator | 13:48:40.592 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.592571 | orchestrator | 13:48:40.592 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 13:48:40.592607 | orchestrator | 13:48:40.592 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 13:48:40.592674 | orchestrator | 13:48:40.592 STDOUT terraform:  + updated = (known after apply) 2025-05-14 13:48:40.592716 | orchestrator | 13:48:40.592 STDOUT terraform:  + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 13:48:40.592736 | orchestrator | 13:48:40.592 STDOUT terraform:  + block_device { 2025-05-14 13:48:40.592761 | orchestrator | 13:48:40.592 STDOUT terraform:  + boot_index = 0 2025-05-14 13:48:40.592794 | orchestrator | 13:48:40.592 STDOUT terraform:  + delete_on_termination = false 2025-05-14 13:48:40.592830 | orchestrator | 13:48:40.592 STDOUT terraform:  + destination_type = "volume" 2025-05-14 13:48:40.592863 | orchestrator | 13:48:40.592 STDOUT terraform:  + multiattach = false 2025-05-14 13:48:40.592896 | orchestrator | 13:48:40.592 STDOUT terraform:  + source_type = "volume" 2025-05-14 13:48:40.592937 | orchestrator | 13:48:40.592 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.592954 | orchestrator | 13:48:40.592 STDOUT terraform:  } 2025-05-14 13:48:40.592960 | orchestrator | 13:48:40.592 STDOUT terraform:  + network { 2025-05-14 13:48:40.592987 | orchestrator | 13:48:40.592 STDOUT terraform:  + access_network = false 2025-05-14 13:48:40.593020 | orchestrator | 13:48:40.592 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 13:48:40.593053 | orchestrator | 13:48:40.593 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 13:48:40.593086 | orchestrator | 13:48:40.593 STDOUT terraform:  + mac = (known after apply) 2025-05-14 13:48:40.593133 | orchestrator | 13:48:40.593 STDOUT terraform:  + name = (known after apply) 2025-05-14 13:48:40.593170 | orchestrator | 13:48:40.593 STDOUT terraform:  + port = (known after apply) 2025-05-14 13:48:40.593205 | orchestrator | 13:48:40.593 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 13:48:40.593211 | orchestrator | 13:48:40.593 STDOUT terraform:  } 2025-05-14 13:48:40.593230 | orchestrator | 13:48:40.593 STDOUT terraform:  } 2025-05-14 13:48:40.593262 | orchestrator | 13:48:40.593 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-14 13:48:40.593298 | orchestrator | 13:48:40.593 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-14 13:48:40.593328 | orchestrator | 13:48:40.593 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-14 13:48:40.593357 | orchestrator | 13:48:40.593 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.593397 | orchestrator | 13:48:40.593 STDOUT terraform:  + name = "testbed" 2025-05-14 13:48:40.593415 | orchestrator | 13:48:40.593 STDOUT terraform:  + private_key = (sensitive value) 2025-05-14 13:48:40.593583 | orchestrator | 13:48:40.593 STDOUT terraform:  + public_key = (known after apply) 2025-05-14 13:48:40.593664 | orchestrator | 13:48:40.593 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.593680 | orchestrator | 13:48:40.593 STDOUT terraform:  + user_id = (known after apply) 2025-05-14 13:48:40.593692 | orchestrator | 13:48:40.593 STDOUT terraform:  } 2025-05-14 13:48:40.593714 | orchestrator | 13:48:40.593 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-14 13:48:40.593735 | orchestrator | 13:48:40.593 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.593781 | orchestrator | 13:48:40.593 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.593802 | orchestrator | 13:48:40.593 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.593819 | orchestrator | 13:48:40.593 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.593838 | 
orchestrator | 13:48:40.593 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.593863 | orchestrator | 13:48:40.593 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.593875 | orchestrator | 13:48:40.593 STDOUT terraform:  } 2025-05-14 13:48:40.593886 | orchestrator | 13:48:40.593 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-14 13:48:40.593897 | orchestrator | 13:48:40.593 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.593908 | orchestrator | 13:48:40.593 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.593925 | orchestrator | 13:48:40.593 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.593937 | orchestrator | 13:48:40.593 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.593947 | orchestrator | 13:48:40.593 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.593961 | orchestrator | 13:48:40.593 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.593972 | orchestrator | 13:48:40.593 STDOUT terraform:  } 2025-05-14 13:48:40.594049 | orchestrator | 13:48:40.593 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-14 13:48:40.594108 | orchestrator | 13:48:40.594 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.594121 | orchestrator | 13:48:40.594 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.594159 | orchestrator | 13:48:40.594 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.594175 | orchestrator | 13:48:40.594 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.594213 | orchestrator | 13:48:40.594 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.594228 | orchestrator | 13:48:40.594 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.594243 | orchestrator | 13:48:40.594 STDOUT terraform:  } 2025-05-14 13:48:40.594303 | orchestrator | 13:48:40.594 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-14 13:48:40.594343 | orchestrator | 13:48:40.594 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.594359 | orchestrator | 13:48:40.594 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.594436 | orchestrator | 13:48:40.594 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.594454 | orchestrator | 13:48:40.594 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.594482 | orchestrator | 13:48:40.594 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.594533 | orchestrator | 13:48:40.594 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.594546 | orchestrator | 13:48:40.594 STDOUT terraform:  } 2025-05-14 13:48:40.594571 | orchestrator | 13:48:40.594 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-14 13:48:40.594623 | orchestrator | 13:48:40.594 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.594640 | orchestrator | 13:48:40.594 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.594677 | orchestrator | 13:48:40.594 STDOUT terraform:  + id = (known after apply) 
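The node_volume_attachment entries attach additional pre-created volumes to those instances; every attribute is (known after apply) because both the instance and volume IDs come from resources created in the same run. A sketch of the kind of resource behind them (the extra-volume resource name and the index arithmetic are assumptions):

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9                                                                   # matches indices [0]..[8] in the plan
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id       # assumed mapping of volumes to nodes
  volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id  # hypothetical extra-volume resource
}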
2025-05-14 13:48:40.594716 | orchestrator | 13:48:40.594 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.594732 | orchestrator | 13:48:40.594 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.594774 | orchestrator | 13:48:40.594 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.594795 | orchestrator | 13:48:40.594 STDOUT terraform:  } 2025-05-14 13:48:40.594815 | orchestrator | 13:48:40.594 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-05-14 13:48:40.594875 | orchestrator | 13:48:40.594 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.594892 | orchestrator | 13:48:40.594 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.594941 | orchestrator | 13:48:40.594 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.594954 | orchestrator | 13:48:40.594 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.594968 | orchestrator | 13:48:40.594 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.595005 | orchestrator | 13:48:40.594 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.595017 | orchestrator | 13:48:40.594 STDOUT terraform:  } 2025-05-14 13:48:40.595068 | orchestrator | 13:48:40.594 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-14 13:48:40.595119 | orchestrator | 13:48:40.595 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.595135 | orchestrator | 13:48:40.595 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.595184 | orchestrator | 13:48:40.595 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.595200 | orchestrator | 13:48:40.595 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.595247 | orchestrator | 13:48:40.595 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.595263 | orchestrator | 13:48:40.595 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.595277 | orchestrator | 13:48:40.595 STDOUT terraform:  } 2025-05-14 13:48:40.595334 | orchestrator | 13:48:40.595 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-14 13:48:40.595439 | orchestrator | 13:48:40.595 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.595456 | orchestrator | 13:48:40.595 STDOUT terraform:  + device = (known after apply) 2025-05-14 13:48:40.595467 | orchestrator | 13:48:40.595 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.595482 | orchestrator | 13:48:40.595 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.595503 | orchestrator | 13:48:40.595 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.595517 | orchestrator | 13:48:40.595 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.595528 | orchestrator | 13:48:40.595 STDOUT terraform:  } 2025-05-14 13:48:40.595599 | orchestrator | 13:48:40.595 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-14 13:48:40.595653 | orchestrator | 13:48:40.595 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 13:48:40.595666 | orchestrator | 13:48:40.595 STDOUT 
terraform:  + device = (known after apply) 2025-05-14 13:48:40.595681 | orchestrator | 13:48:40.595 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.595705 | orchestrator | 13:48:40.595 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 13:48:40.595743 | orchestrator | 13:48:40.595 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.595759 | orchestrator | 13:48:40.595 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 13:48:40.595770 | orchestrator | 13:48:40.595 STDOUT terraform:  } 2025-05-14 13:48:40.595832 | orchestrator | 13:48:40.595 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-14 13:48:40.595889 | orchestrator | 13:48:40.595 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-14 13:48:40.595905 | orchestrator | 13:48:40.595 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-14 13:48:40.595921 | orchestrator | 13:48:40.595 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-14 13:48:40.595959 | orchestrator | 13:48:40.595 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.595974 | orchestrator | 13:48:40.595 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 13:48:40.596012 | orchestrator | 13:48:40.595 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.596024 | orchestrator | 13:48:40.595 STDOUT terraform:  } 2025-05-14 13:48:40.596063 | orchestrator | 13:48:40.596 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-14 13:48:40.596117 | orchestrator | 13:48:40.596 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-14 13:48:40.596133 | orchestrator | 13:48:40.596 STDOUT terraform:  + address = (known after apply) 2025-05-14 13:48:40.596171 | orchestrator | 13:48:40.596 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.596187 | orchestrator | 13:48:40.596 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-14 13:48:40.596224 | orchestrator | 13:48:40.596 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.596240 | orchestrator | 13:48:40.596 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-14 13:48:40.596254 | orchestrator | 13:48:40.596 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.596302 | orchestrator | 13:48:40.596 STDOUT terraform:  + pool = "public" 2025-05-14 13:48:40.596316 | orchestrator | 13:48:40.596 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 13:48:40.596339 | orchestrator | 13:48:40.596 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.596353 | orchestrator | 13:48:40.596 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.596415 | orchestrator | 13:48:40.596 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.596429 | orchestrator | 13:48:40.596 STDOUT terraform:  } 2025-05-14 13:48:40.596444 | orchestrator | 13:48:40.596 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-14 13:48:40.596502 | orchestrator | 13:48:40.596 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-14 13:48:40.596543 | orchestrator | 13:48:40.596 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.596559 | orchestrator | 13:48:40.596 STDOUT terraform:  + all_tags = 
(known after apply) 2025-05-14 13:48:40.596597 | orchestrator | 13:48:40.596 STDOUT terraform:  + availability_zone_hints = [ 2025-05-14 13:48:40.596609 | orchestrator | 13:48:40.596 STDOUT terraform:  + "nova", 2025-05-14 13:48:40.596624 | orchestrator | 13:48:40.596 STDOUT terraform:  ] 2025-05-14 13:48:40.596638 | orchestrator | 13:48:40.596 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-14 13:48:40.596690 | orchestrator | 13:48:40.596 STDOUT terraform:  + external = (known after apply) 2025-05-14 13:48:40.596740 | orchestrator | 13:48:40.596 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.596756 | orchestrator | 13:48:40.596 STDOUT terraform:  + mtu = (known after apply) 2025-05-14 13:48:40.596805 | orchestrator | 13:48:40.596 STDOUT terraform:  + name = "net-testbed-management" 2025-05-14 13:48:40.596820 | orchestrator | 13:48:40.596 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.596874 | orchestrator | 13:48:40.596 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.596891 | orchestrator | 13:48:40.596 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.596951 | orchestrator | 13:48:40.596 STDOUT terraform:  + shared = (known after apply) 2025-05-14 13:48:40.597005 | orchestrator | 13:48:40.596 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.597039 | orchestrator | 13:48:40.596 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-14 13:48:40.597060 | orchestrator | 13:48:40.597 STDOUT terraform:  + segments (known after apply) 2025-05-14 13:48:40.597084 | orchestrator | 13:48:40.597 STDOUT terraform:  } 2025-05-14 13:48:40.597099 | orchestrator | 13:48:40.597 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-14 13:48:40.597142 | orchestrator | 13:48:40.597 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-14 13:48:40.597158 | orchestrator | 13:48:40.597 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.597234 | orchestrator | 13:48:40.597 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.597260 | orchestrator | 13:48:40.597 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.597284 | orchestrator | 13:48:40.597 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.597361 | orchestrator | 13:48:40.597 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.597414 | orchestrator | 13:48:40.597 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.597442 | orchestrator | 13:48:40.597 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.597468 | orchestrator | 13:48:40.597 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.597518 | orchestrator | 13:48:40.597 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.597534 | orchestrator | 13:48:40.597 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.597583 | orchestrator | 13:48:40.597 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.597600 | orchestrator | 13:48:40.597 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.597656 | orchestrator | 13:48:40.597 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.597674 | orchestrator | 13:48:40.597 STDOUT terraform:  + region = (known after 
apply) 2025-05-14 13:48:40.597726 | orchestrator | 13:48:40.597 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.597742 | orchestrator | 13:48:40.597 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.597792 | orchestrator | 13:48:40.597 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.597804 | orchestrator | 13:48:40.597 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.597819 | orchestrator | 13:48:40.597 STDOUT terraform:  } 2025-05-14 13:48:40.597833 | orchestrator | 13:48:40.597 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.597848 | orchestrator | 13:48:40.597 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.597862 | orchestrator | 13:48:40.597 STDOUT terraform:  } 2025-05-14 13:48:40.597892 | orchestrator | 13:48:40.597 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.597907 | orchestrator | 13:48:40.597 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.597922 | orchestrator | 13:48:40.597 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-14 13:48:40.597961 | orchestrator | 13:48:40.597 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.597974 | orchestrator | 13:48:40.597 STDOUT terraform:  } 2025-05-14 13:48:40.597989 | orchestrator | 13:48:40.597 STDOUT terraform:  } 2025-05-14 13:48:40.598043 | orchestrator | 13:48:40.597 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-05-14 13:48:40.598092 | orchestrator | 13:48:40.598 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 13:48:40.598134 | orchestrator | 13:48:40.598 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.598150 | orchestrator | 13:48:40.598 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.598229 | orchestrator | 13:48:40.598 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.598268 | orchestrator | 13:48:40.598 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.598295 | orchestrator | 13:48:40.598 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.598339 | orchestrator | 13:48:40.598 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.598404 | orchestrator | 13:48:40.598 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.598450 | orchestrator | 13:48:40.598 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.598496 | orchestrator | 13:48:40.598 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.598540 | orchestrator | 13:48:40.598 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.598579 | orchestrator | 13:48:40.598 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.598616 | orchestrator | 13:48:40.598 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.598654 | orchestrator | 13:48:40.598 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.598694 | orchestrator | 13:48:40.598 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.598732 | orchestrator | 13:48:40.598 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.598769 | orchestrator | 13:48:40.598 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.598786 | orchestrator | 13:48:40.598 STDOUT terraform:  
+ allowed_address_pairs { 2025-05-14 13:48:40.598811 | orchestrator | 13:48:40.598 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.598831 | orchestrator | 13:48:40.598 STDOUT terraform:  } 2025-05-14 13:48:40.598854 | orchestrator | 13:48:40.598 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.598870 | orchestrator | 13:48:40.598 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 13:48:40.598889 | orchestrator | 13:48:40.598 STDOUT terraform:  } 2025-05-14 13:48:40.598908 | orchestrator | 13:48:40.598 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.598932 | orchestrator | 13:48:40.598 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.598951 | orchestrator | 13:48:40.598 STDOUT terraform:  } 2025-05-14 13:48:40.598975 | orchestrator | 13:48:40.598 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.598995 | orchestrator | 13:48:40.598 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 13:48:40.599014 | orchestrator | 13:48:40.598 STDOUT terraform:  } 2025-05-14 13:48:40.599032 | orchestrator | 13:48:40.598 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.599047 | orchestrator | 13:48:40.598 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.599071 | orchestrator | 13:48:40.599 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-14 13:48:40.599261 | orchestrator | 13:48:40.599 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.599282 | orchestrator | 13:48:40.599 STDOUT terraform:  } 2025-05-14 13:48:40.599299 | orchestrator | 13:48:40.599 STDOUT terraform:  } 2025-05-14 13:48:40.599318 | orchestrator | 13:48:40.599 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-14 13:48:40.599351 | orchestrator | 13:48:40.599 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 13:48:40.599373 | orchestrator | 13:48:40.599 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.599426 | orchestrator | 13:48:40.599 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.599438 | orchestrator | 13:48:40.599 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.599449 | orchestrator | 13:48:40.599 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.599460 | orchestrator | 13:48:40.599 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.599474 | orchestrator | 13:48:40.599 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.599488 | orchestrator | 13:48:40.599 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.599545 | orchestrator | 13:48:40.599 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.599561 | orchestrator | 13:48:40.599 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.599607 | orchestrator | 13:48:40.599 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.599647 | orchestrator | 13:48:40.599 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.599663 | orchestrator | 13:48:40.599 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.599709 | orchestrator | 13:48:40.599 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.599738 | orchestrator | 13:48:40.599 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.599763 | 
orchestrator | 13:48:40.599 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.599824 | orchestrator | 13:48:40.599 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.599847 | orchestrator | 13:48:40.599 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.599874 | orchestrator | 13:48:40.599 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.599895 | orchestrator | 13:48:40.599 STDOUT terraform:  } 2025-05-14 13:48:40.599915 | orchestrator | 13:48:40.599 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.599941 | orchestrator | 13:48:40.599 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 13:48:40.599962 | orchestrator | 13:48:40.599 STDOUT terraform:  } 2025-05-14 13:48:40.599981 | orchestrator | 13:48:40.599 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.600010 | orchestrator | 13:48:40.599 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.600022 | orchestrator | 13:48:40.599 STDOUT terraform:  } 2025-05-14 13:48:40.600033 | orchestrator | 13:48:40.599 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.600044 | orchestrator | 13:48:40.599 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 13:48:40.600058 | orchestrator | 13:48:40.599 STDOUT terraform:  } 2025-05-14 13:48:40.600070 | orchestrator | 13:48:40.600 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.600096 | orchestrator | 13:48:40.600 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.600111 | orchestrator | 13:48:40.600 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-14 13:48:40.600122 | orchestrator | 13:48:40.600 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.600133 | orchestrator | 13:48:40.600 STDOUT terraform:  } 2025-05-14 13:48:40.600147 | orchestrator | 13:48:40.600 STDOUT terraform:  } 2025-05-14 13:48:40.600166 | orchestrator | 13:48:40.600 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-14 13:48:40.600230 | orchestrator | 13:48:40.600 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 13:48:40.600271 | orchestrator | 13:48:40.600 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.600297 | orchestrator | 13:48:40.600 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.600344 | orchestrator | 13:48:40.600 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.600512 | orchestrator | 13:48:40.600 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.600546 | orchestrator | 13:48:40.600 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.600556 | orchestrator | 13:48:40.600 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.600566 | orchestrator | 13:48:40.600 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.600580 | orchestrator | 13:48:40.600 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.600591 | orchestrator | 13:48:40.600 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.600604 | orchestrator | 13:48:40.600 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.600645 | orchestrator | 13:48:40.600 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.600694 | orchestrator | 13:48:40.600 STDOUT terraform:  + port_security_enabled = 
(known after apply) 2025-05-14 13:48:40.603546 | orchestrator | 13:48:40.600 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.606122 | orchestrator | 13:48:40.603 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.606263 | orchestrator | 13:48:40.606 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.606330 | orchestrator | 13:48:40.606 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.606367 | orchestrator | 13:48:40.606 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.606439 | orchestrator | 13:48:40.606 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.606468 | orchestrator | 13:48:40.606 STDOUT terraform:  } 2025-05-14 13:48:40.606501 | orchestrator | 13:48:40.606 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.606544 | orchestrator | 13:48:40.606 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 13:48:40.606571 | orchestrator | 13:48:40.606 STDOUT terraform:  } 2025-05-14 13:48:40.606625 | orchestrator | 13:48:40.606 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.606698 | orchestrator | 13:48:40.606 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.606738 | orchestrator | 13:48:40.606 STDOUT terraform:  } 2025-05-14 13:48:40.606782 | orchestrator | 13:48:40.606 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.606842 | orchestrator | 13:48:40.606 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 13:48:40.606879 | orchestrator | 13:48:40.606 STDOUT terraform:  2025-05-14 13:48:40.607020 | orchestrator | 13:48:40.606 STDOUT terraform:  } 2025-05-14 13:48:40.607074 | orchestrator | 13:48:40.607 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.607111 | orchestrator | 13:48:40.607 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.607163 | orchestrator | 13:48:40.607 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-14 13:48:40.607235 | orchestrator | 13:48:40.607 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.607275 | orchestrator | 13:48:40.607 STDOUT terraform:  } 2025-05-14 13:48:40.607313 | orchestrator | 13:48:40.607 STDOUT terraform:  } 2025-05-14 13:48:40.607437 | orchestrator | 13:48:40.607 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-14 13:48:40.607536 | orchestrator | 13:48:40.607 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 13:48:40.607616 | orchestrator | 13:48:40.607 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.607690 | orchestrator | 13:48:40.607 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.607761 | orchestrator | 13:48:40.607 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.607834 | orchestrator | 13:48:40.607 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.607908 | orchestrator | 13:48:40.607 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.607985 | orchestrator | 13:48:40.607 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.608059 | orchestrator | 13:48:40.608 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.608135 | orchestrator | 13:48:40.608 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.608210 | orchestrator | 13:48:40.608 STDOUT terraform:  + id = 
(known after apply) 2025-05-14 13:48:40.608281 | orchestrator | 13:48:40.608 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.608353 | orchestrator | 13:48:40.608 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.608437 | orchestrator | 13:48:40.608 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.608511 | orchestrator | 13:48:40.608 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.608585 | orchestrator | 13:48:40.608 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.608664 | orchestrator | 13:48:40.608 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.608737 | orchestrator | 13:48:40.608 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.608803 | orchestrator | 13:48:40.608 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.608872 | orchestrator | 13:48:40.608 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.608909 | orchestrator | 13:48:40.608 STDOUT terraform:  } 2025-05-14 13:48:40.608953 | orchestrator | 13:48:40.608 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.609012 | orchestrator | 13:48:40.608 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 13:48:40.609048 | orchestrator | 13:48:40.609 STDOUT terraform:  } 2025-05-14 13:48:40.609093 | orchestrator | 13:48:40.609 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.609153 | orchestrator | 13:48:40.609 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.609189 | orchestrator | 13:48:40.609 STDOUT terraform:  } 2025-05-14 13:48:40.609233 | orchestrator | 13:48:40.609 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612109 | orchestrator | 13:48:40.609 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 13:48:40.612166 | orchestrator | 13:48:40.609 STDOUT terraform:  } 2025-05-14 13:48:40.612174 | orchestrator | 13:48:40.609 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.612180 | orchestrator | 13:48:40.609 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.612187 | orchestrator | 13:48:40.609 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-14 13:48:40.612193 | orchestrator | 13:48:40.609 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.612199 | orchestrator | 13:48:40.609 STDOUT terraform:  } 2025-05-14 13:48:40.612205 | orchestrator | 13:48:40.609 STDOUT terraform:  } 2025-05-14 13:48:40.612211 | orchestrator | 13:48:40.609 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-14 13:48:40.612217 | orchestrator | 13:48:40.609 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 13:48:40.612223 | orchestrator | 13:48:40.609 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.612229 | orchestrator | 13:48:40.609 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.612235 | orchestrator | 13:48:40.609 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.612241 | orchestrator | 13:48:40.609 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.612247 | orchestrator | 13:48:40.609 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.612253 | orchestrator | 13:48:40.609 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.612259 | 
orchestrator | 13:48:40.610 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.612264 | orchestrator | 13:48:40.610 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.612270 | orchestrator | 13:48:40.610 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.612276 | orchestrator | 13:48:40.610 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.612282 | orchestrator | 13:48:40.610 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.612299 | orchestrator | 13:48:40.610 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.612305 | orchestrator | 13:48:40.610 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.612311 | orchestrator | 13:48:40.610 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.612316 | orchestrator | 13:48:40.610 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.612322 | orchestrator | 13:48:40.610 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.612328 | orchestrator | 13:48:40.610 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612334 | orchestrator | 13:48:40.610 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.612339 | orchestrator | 13:48:40.610 STDOUT terraform:  } 2025-05-14 13:48:40.612346 | orchestrator | 13:48:40.610 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612359 | orchestrator | 13:48:40.610 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 13:48:40.612366 | orchestrator | 13:48:40.610 STDOUT terraform:  } 2025-05-14 13:48:40.612402 | orchestrator | 13:48:40.610 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612409 | orchestrator | 13:48:40.610 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.612415 | orchestrator | 13:48:40.610 STDOUT terraform:  } 2025-05-14 13:48:40.612421 | orchestrator | 13:48:40.610 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612427 | orchestrator | 13:48:40.610 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 13:48:40.612433 | orchestrator | 13:48:40.610 STDOUT terraform:  } 2025-05-14 13:48:40.612439 | orchestrator | 13:48:40.610 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.612456 | orchestrator | 13:48:40.610 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.612462 | orchestrator | 13:48:40.610 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-14 13:48:40.612468 | orchestrator | 13:48:40.610 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.612474 | orchestrator | 13:48:40.611 STDOUT terraform:  } 2025-05-14 13:48:40.612480 | orchestrator | 13:48:40.611 STDOUT terraform:  } 2025-05-14 13:48:40.612486 | orchestrator | 13:48:40.611 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-14 13:48:40.612492 | orchestrator | 13:48:40.611 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 13:48:40.612498 | orchestrator | 13:48:40.611 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.612503 | orchestrator | 13:48:40.611 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 13:48:40.612509 | orchestrator | 13:48:40.611 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 13:48:40.612515 | orchestrator | 13:48:40.611 STDOUT terraform:  + all_tags = 
(known after apply) 2025-05-14 13:48:40.612521 | orchestrator | 13:48:40.611 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 13:48:40.612530 | orchestrator | 13:48:40.611 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 13:48:40.612541 | orchestrator | 13:48:40.611 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 13:48:40.612547 | orchestrator | 13:48:40.611 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 13:48:40.612552 | orchestrator | 13:48:40.611 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.612558 | orchestrator | 13:48:40.611 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 13:48:40.612564 | orchestrator | 13:48:40.611 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.612570 | orchestrator | 13:48:40.611 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 13:48:40.612575 | orchestrator | 13:48:40.611 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 13:48:40.612581 | orchestrator | 13:48:40.611 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.612587 | orchestrator | 13:48:40.611 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 13:48:40.612593 | orchestrator | 13:48:40.612 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.612598 | orchestrator | 13:48:40.612 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612604 | orchestrator | 13:48:40.612 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 13:48:40.612610 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.612616 | orchestrator | 13:48:40.612 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612622 | orchestrator | 13:48:40.612 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 13:48:40.612627 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.612633 | orchestrator | 13:48:40.612 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612639 | orchestrator | 13:48:40.612 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 13:48:40.612645 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.612650 | orchestrator | 13:48:40.612 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 13:48:40.612656 | orchestrator | 13:48:40.612 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 13:48:40.612662 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.612668 | orchestrator | 13:48:40.612 STDOUT terraform:  + binding (known after apply) 2025-05-14 13:48:40.612676 | orchestrator | 13:48:40.612 STDOUT terraform:  + fixed_ip { 2025-05-14 13:48:40.612682 | orchestrator | 13:48:40.612 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-14 13:48:40.612688 | orchestrator | 13:48:40.612 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.612694 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.612700 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.612706 | orchestrator | 13:48:40.612 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-14 13:48:40.612714 | orchestrator | 13:48:40.612 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-14 13:48:40.612764 | orchestrator | 13:48:40.612 STDOUT terraform:  + force_destroy = false 2025-05-14 13:48:40.612789 | orchestrator | 13:48:40.612 
STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.612963 | orchestrator | 13:48:40.612 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 13:48:40.613033 | orchestrator | 13:48:40.612 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.613048 | orchestrator | 13:48:40.612 STDOUT terraform:  + router_id = (known after apply) 2025-05-14 13:48:40.613069 | orchestrator | 13:48:40.612 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 13:48:40.613081 | orchestrator | 13:48:40.612 STDOUT terraform:  } 2025-05-14 13:48:40.613092 | orchestrator | 13:48:40.612 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-14 13:48:40.613121 | orchestrator | 13:48:40.613 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-14 13:48:40.613136 | orchestrator | 13:48:40.613 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 13:48:40.613281 | orchestrator | 13:48:40.613 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.613304 | orchestrator | 13:48:40.613 STDOUT terraform:  + availability_zone_hints = [ 2025-05-14 13:48:40.613312 | orchestrator | 13:48:40.613 STDOUT terraform:  + "nova", 2025-05-14 13:48:40.613322 | orchestrator | 13:48:40.613 STDOUT terraform:  ] 2025-05-14 13:48:40.613328 | orchestrator | 13:48:40.613 STDOUT terraform:  + distributed = (known after apply) 2025-05-14 13:48:40.613446 | orchestrator | 13:48:40.613 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-14 13:48:40.613474 | orchestrator | 13:48:40.613 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-14 13:48:40.613542 | orchestrator | 13:48:40.613 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.613589 | orchestrator | 13:48:40.613 STDOUT terraform:  + name = "testbed" 2025-05-14 13:48:40.613650 | orchestrator | 13:48:40.613 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.613709 | orchestrator | 13:48:40.613 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.613759 | orchestrator | 13:48:40.613 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-14 13:48:40.613768 | orchestrator | 13:48:40.613 STDOUT terraform:  } 2025-05-14 13:48:40.613873 | orchestrator | 13:48:40.613 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-14 13:48:40.613962 | orchestrator | 13:48:40.613 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-14 13:48:40.613998 | orchestrator | 13:48:40.613 STDOUT terraform:  + description = "ssh" 2025-05-14 13:48:40.614057 | orchestrator | 13:48:40.613 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.614091 | orchestrator | 13:48:40.614 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.614141 | orchestrator | 13:48:40.614 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.614174 | orchestrator | 13:48:40.614 STDOUT terraform:  + port_range_max = 22 2025-05-14 13:48:40.614207 | orchestrator | 13:48:40.614 STDOUT terraform:  + port_range_min = 22 2025-05-14 13:48:40.614239 | orchestrator | 13:48:40.614 STDOUT terraform:  + protocol = "tcp" 2025-05-14 13:48:40.614288 | orchestrator | 13:48:40.614 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.614335 | orchestrator | 13:48:40.614 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.614375 
| orchestrator | 13:48:40.614 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.614449 | orchestrator | 13:48:40.614 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.614484 | orchestrator | 13:48:40.614 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.614493 | orchestrator | 13:48:40.614 STDOUT terraform:  } 2025-05-14 13:48:40.614582 | orchestrator | 13:48:40.614 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-14 13:48:40.614670 | orchestrator | 13:48:40.614 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-14 13:48:40.614709 | orchestrator | 13:48:40.614 STDOUT terraform:  + description = "wireguard" 2025-05-14 13:48:40.614749 | orchestrator | 13:48:40.614 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.614782 | orchestrator | 13:48:40.614 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.614831 | orchestrator | 13:48:40.614 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.614863 | orchestrator | 13:48:40.614 STDOUT terraform:  + port_range_max = 51820 2025-05-14 13:48:40.614895 | orchestrator | 13:48:40.614 STDOUT terraform:  + port_range_min = 51820 2025-05-14 13:48:40.614928 | orchestrator | 13:48:40.614 STDOUT terraform:  + protocol = "udp" 2025-05-14 13:48:40.614978 | orchestrator | 13:48:40.614 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.615026 | orchestrator | 13:48:40.614 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.615066 | orchestrator | 13:48:40.615 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.615114 | orchestrator | 13:48:40.615 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.615162 | orchestrator | 13:48:40.615 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.615170 | orchestrator | 13:48:40.615 STDOUT terraform:  } 2025-05-14 13:48:40.615263 | orchestrator | 13:48:40.615 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-14 13:48:40.615349 | orchestrator | 13:48:40.615 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-14 13:48:40.615441 | orchestrator | 13:48:40.615 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.615452 | orchestrator | 13:48:40.615 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.615499 | orchestrator | 13:48:40.615 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.615532 | orchestrator | 13:48:40.615 STDOUT terraform:  + protocol = "tcp" 2025-05-14 13:48:40.615583 | orchestrator | 13:48:40.615 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.615632 | orchestrator | 13:48:40.615 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.615681 | orchestrator | 13:48:40.615 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-14 13:48:40.615730 | orchestrator | 13:48:40.615 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.615777 | orchestrator | 13:48:40.615 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.615785 | orchestrator | 13:48:40.615 STDOUT terraform:  } 2025-05-14 13:48:40.615878 | orchestrator | 13:48:40.615 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-14 13:48:40.615965 | orchestrator | 13:48:40.615 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-14 13:48:40.616003 | orchestrator | 13:48:40.615 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.616037 | orchestrator | 13:48:40.615 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.616091 | orchestrator | 13:48:40.616 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.616115 | orchestrator | 13:48:40.616 STDOUT terraform:  + protocol = "udp" 2025-05-14 13:48:40.616164 | orchestrator | 13:48:40.616 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.616213 | orchestrator | 13:48:40.616 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.616259 | orchestrator | 13:48:40.616 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-14 13:48:40.616308 | orchestrator | 13:48:40.616 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.616356 | orchestrator | 13:48:40.616 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.616364 | orchestrator | 13:48:40.616 STDOUT terraform:  } 2025-05-14 13:48:40.616469 | orchestrator | 13:48:40.616 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-14 13:48:40.616554 | orchestrator | 13:48:40.616 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-14 13:48:40.616592 | orchestrator | 13:48:40.616 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.616626 | orchestrator | 13:48:40.616 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.616675 | orchestrator | 13:48:40.616 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.616712 | orchestrator | 13:48:40.616 STDOUT terraform:  + protocol = "icmp" 2025-05-14 13:48:40.616759 | orchestrator | 13:48:40.616 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.616808 | orchestrator | 13:48:40.616 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.616848 | orchestrator | 13:48:40.616 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.616906 | orchestrator | 13:48:40.616 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.616959 | orchestrator | 13:48:40.616 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.616967 | orchestrator | 13:48:40.616 STDOUT terraform:  } 2025-05-14 13:48:40.617056 | orchestrator | 13:48:40.616 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-14 13:48:40.617146 | orchestrator | 13:48:40.617 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-14 13:48:40.617182 | orchestrator | 13:48:40.617 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.617216 | orchestrator | 13:48:40.617 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.617265 | orchestrator | 13:48:40.617 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.617299 | orchestrator | 13:48:40.617 STDOUT terraform:  + protocol = "tcp" 2025-05-14 13:48:40.617349 | orchestrator | 13:48:40.617 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.617503 | orchestrator | 13:48:40.617 STDOUT terraform:  + 
remote_group_id = (known after apply) 2025-05-14 13:48:40.617542 | orchestrator | 13:48:40.617 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.617563 | orchestrator | 13:48:40.617 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.617576 | orchestrator | 13:48:40.617 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.617587 | orchestrator | 13:48:40.617 STDOUT terraform:  } 2025-05-14 13:48:40.617635 | orchestrator | 13:48:40.617 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-14 13:48:40.617718 | orchestrator | 13:48:40.617 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-14 13:48:40.617735 | orchestrator | 13:48:40.617 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.617784 | orchestrator | 13:48:40.617 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.617833 | orchestrator | 13:48:40.617 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.617849 | orchestrator | 13:48:40.617 STDOUT terraform:  + protocol = "udp" 2025-05-14 13:48:40.617912 | orchestrator | 13:48:40.617 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.617952 | orchestrator | 13:48:40.617 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.617991 | orchestrator | 13:48:40.617 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.618048 | orchestrator | 13:48:40.617 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.618103 | orchestrator | 13:48:40.618 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.618119 | orchestrator | 13:48:40.618 STDOUT terraform:  } 2025-05-14 13:48:40.618203 | orchestrator | 13:48:40.618 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-14 13:48:40.618288 | orchestrator | 13:48:40.618 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-14 13:48:40.618339 | orchestrator | 13:48:40.618 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.618356 | orchestrator | 13:48:40.618 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.618428 | orchestrator | 13:48:40.618 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.618465 | orchestrator | 13:48:40.618 STDOUT terraform:  + protocol = "icmp" 2025-05-14 13:48:40.618516 | orchestrator | 13:48:40.618 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.618533 | orchestrator | 13:48:40.618 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.618587 | orchestrator | 13:48:40.618 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.618637 | orchestrator | 13:48:40.618 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.618687 | orchestrator | 13:48:40.618 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.618703 | orchestrator | 13:48:40.618 STDOUT terraform:  } 2025-05-14 13:48:40.618783 | orchestrator | 13:48:40.618 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-14 13:48:40.618866 | orchestrator | 13:48:40.618 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-14 13:48:40.618884 | orchestrator | 13:48:40.618 STDOUT terraform:  + description = 
"vrrp" 2025-05-14 13:48:40.618938 | orchestrator | 13:48:40.618 STDOUT terraform:  + direction = "ingress" 2025-05-14 13:48:40.618955 | orchestrator | 13:48:40.618 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 13:48:40.619010 | orchestrator | 13:48:40.618 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.619027 | orchestrator | 13:48:40.618 STDOUT terraform:  + protocol = "112" 2025-05-14 13:48:40.619091 | orchestrator | 13:48:40.619 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.619134 | orchestrator | 13:48:40.619 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 13:48:40.619155 | orchestrator | 13:48:40.619 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 13:48:40.619215 | orchestrator | 13:48:40.619 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 13:48:40.619255 | orchestrator | 13:48:40.619 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.619270 | orchestrator | 13:48:40.619 STDOUT terraform:  } 2025-05-14 13:48:40.619341 | orchestrator | 13:48:40.619 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-14 13:48:40.619466 | orchestrator | 13:48:40.619 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-14 13:48:40.619494 | orchestrator | 13:48:40.619 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.619563 | orchestrator | 13:48:40.619 STDOUT terraform:  + description = "management security group" 2025-05-14 13:48:40.619604 | orchestrator | 13:48:40.619 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.619643 | orchestrator | 13:48:40.619 STDOUT terraform:  + name = "testbed-management" 2025-05-14 13:48:40.619692 | orchestrator | 13:48:40.619 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.619722 | orchestrator | 13:48:40.619 STDOUT terraform:  + stateful = (known after apply) 2025-05-14 13:48:40.619779 | orchestrator | 13:48:40.619 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.619807 | orchestrator | 13:48:40.619 STDOUT terraform:  } 2025-05-14 13:48:40.619871 | orchestrator | 13:48:40.619 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-14 13:48:40.619948 | orchestrator | 13:48:40.619 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-14 13:48:40.619987 | orchestrator | 13:48:40.619 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.620026 | orchestrator | 13:48:40.619 STDOUT terraform:  + description = "node security group" 2025-05-14 13:48:40.620076 | orchestrator | 13:48:40.620 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.620126 | orchestrator | 13:48:40.620 STDOUT terraform:  + name = "testbed-node" 2025-05-14 13:48:40.620141 | orchestrator | 13:48:40.620 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.620202 | orchestrator | 13:48:40.620 STDOUT terraform:  + stateful = (known after apply) 2025-05-14 13:48:40.620241 | orchestrator | 13:48:40.620 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.620263 | orchestrator | 13:48:40.620 STDOUT terraform:  } 2025-05-14 13:48:40.620336 | orchestrator | 13:48:40.620 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-14 13:48:40.620448 | orchestrator | 13:48:40.620 STDOUT terraform:  + resource 
"openstack_networking_subnet_v2" "subnet_management" { 2025-05-14 13:48:40.620472 | orchestrator | 13:48:40.620 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 13:48:40.620529 | orchestrator | 13:48:40.620 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-14 13:48:40.620545 | orchestrator | 13:48:40.620 STDOUT terraform:  + dns_nameservers = [ 2025-05-14 13:48:40.620561 | orchestrator | 13:48:40.620 STDOUT terraform:  + "8.8.8.8", 2025-05-14 13:48:40.620601 | orchestrator | 13:48:40.620 STDOUT terraform:  + "9.9.9.9", 2025-05-14 13:48:40.620614 | orchestrator | 13:48:40.620 STDOUT terraform:  ] 2025-05-14 13:48:40.620629 | orchestrator | 13:48:40.620 STDOUT terraform:  + enable_dhcp = true 2025-05-14 13:48:40.620687 | orchestrator | 13:48:40.620 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-14 13:48:40.620727 | orchestrator | 13:48:40.620 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.620743 | orchestrator | 13:48:40.620 STDOUT terraform:  + ip_version = 4 2025-05-14 13:48:40.620794 | orchestrator | 13:48:40.620 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-14 13:48:40.620833 | orchestrator | 13:48:40.620 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-14 13:48:40.620890 | orchestrator | 13:48:40.620 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-14 13:48:40.620928 | orchestrator | 13:48:40.620 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 13:48:40.620944 | orchestrator | 13:48:40.620 STDOUT terraform:  + no_gateway = false 2025-05-14 13:48:40.620999 | orchestrator | 13:48:40.620 STDOUT terraform:  + region = (known after apply) 2025-05-14 13:48:40.621038 | orchestrator | 13:48:40.620 STDOUT terraform:  + service_types = (known after apply) 2025-05-14 13:48:40.621081 | orchestrator | 13:48:40.621 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 13:48:40.621106 | orchestrator | 13:48:40.621 STDOUT terraform:  + allocation_pool { 2025-05-14 13:48:40.621121 | orchestrator | 13:48:40.621 STDOUT terraform:  + end = "192.168.31.250" 2025-05-14 13:48:40.621161 | orchestrator | 13:48:40.621 STDOUT terraform:  + start = "192.168.31.200" 2025-05-14 13:48:40.621177 | orchestrator | 13:48:40.621 STDOUT terraform:  } 2025-05-14 13:48:40.621189 | orchestrator | 13:48:40.621 STDOUT terraform:  } 2025-05-14 13:48:40.621238 | orchestrator | 13:48:40.621 STDOUT terraform:  # terraform_data.image will be created 2025-05-14 13:48:40.621254 | orchestrator | 13:48:40.621 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-14 13:48:40.621302 | orchestrator | 13:48:40.621 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.621317 | orchestrator | 13:48:40.621 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-14 13:48:40.621332 | orchestrator | 13:48:40.621 STDOUT terraform:  + output = (known after apply) 2025-05-14 13:48:40.621346 | orchestrator | 13:48:40.621 STDOUT terraform:  } 2025-05-14 13:48:40.621431 | orchestrator | 13:48:40.621 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-14 13:48:40.621450 | orchestrator | 13:48:40.621 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-14 13:48:40.621464 | orchestrator | 13:48:40.621 STDOUT terraform:  + id = (known after apply) 2025-05-14 13:48:40.621503 | orchestrator | 13:48:40.621 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-14 13:48:40.621519 | orchestrator | 13:48:40.621 STDOUT terraform:  + output = (known after apply) 2025-05-14 
13:48:40.621534 | orchestrator | 13:48:40.621 STDOUT terraform:  } 2025-05-14 13:48:40.621661 | orchestrator | 13:48:40.621 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-14 13:48:40.621688 | orchestrator | 13:48:40.621 STDOUT terraform: Changes to Outputs: 2025-05-14 13:48:40.621696 | orchestrator | 13:48:40.621 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-14 13:48:40.621705 | orchestrator | 13:48:40.621 STDOUT terraform:  + private_key = (sensitive value) 2025-05-14 13:48:40.806222 | orchestrator | 13:48:40.805 STDOUT terraform: terraform_data.image: Creating... 2025-05-14 13:48:40.806347 | orchestrator | 13:48:40.805 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-14 13:48:40.806433 | orchestrator | 13:48:40.806 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=00e429da-1216-5f0d-020a-0f679e3d59d9] 2025-05-14 13:48:40.806487 | orchestrator | 13:48:40.806 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=bb73dd8e-c632-a4f3-2d10-973ffd27b3bd] 2025-05-14 13:48:40.817573 | orchestrator | 13:48:40.817 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-14 13:48:40.818005 | orchestrator | 13:48:40.817 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-14 13:48:40.823894 | orchestrator | 13:48:40.823 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-14 13:48:40.829091 | orchestrator | 13:48:40.828 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-14 13:48:40.829272 | orchestrator | 13:48:40.829 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-14 13:48:40.830148 | orchestrator | 13:48:40.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-14 13:48:40.830304 | orchestrator | 13:48:40.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-14 13:48:40.830460 | orchestrator | 13:48:40.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-14 13:48:40.833447 | orchestrator | 13:48:40.833 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-14 13:48:40.835610 | orchestrator | 13:48:40.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-14 13:48:41.298922 | orchestrator | 13:48:41.298 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-14 13:48:41.301473 | orchestrator | 13:48:41.300 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-14 13:48:41.306726 | orchestrator | 13:48:41.306 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-14 13:48:41.311849 | orchestrator | 13:48:41.311 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-14 13:48:41.648806 | orchestrator | 13:48:41.648 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-14 13:48:41.657569 | orchestrator | 13:48:41.657 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
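The plan echoed above reveals the management security group and its rules almost completely. A minimal HCL sketch reconstructed from those attribute values (not taken from the actual testbed repository; the `security_group_id` references are assumptions, since the plan only shows them as `(known after apply)`) could look like this:

```hcl
# Reconstructed sketch -- attribute values taken from the plan output above,
# resource wiring (security_group_id) assumed.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# Rules 3-5 follow the same pattern: unrestricted tcp and udp from the
# management CIDR 192.168.16.0/20, plus icmp from 0.0.0.0/0.
```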
2025-05-14 13:48:46.866330 | orchestrator | 13:48:46.865 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=5e1e0f02-a397-4052-a6db-b1b487978615] 2025-05-14 13:48:46.880008 | orchestrator | 13:48:46.879 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-14 13:48:50.831325 | orchestrator | 13:48:50.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-14 13:48:50.831494 | orchestrator | 13:48:50.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-14 13:48:50.831515 | orchestrator | 13:48:50.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-14 13:48:50.831675 | orchestrator | 13:48:50.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-14 13:48:50.831804 | orchestrator | 13:48:50.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-14 13:48:50.836492 | orchestrator | 13:48:50.836 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-14 13:48:51.307519 | orchestrator | 13:48:51.307 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-14 13:48:51.312564 | orchestrator | 13:48:51.312 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-14 13:48:51.471606 | orchestrator | 13:48:51.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=1515eacf-7c8c-4c61-b2e2-7b383c3e44c1] 2025-05-14 13:48:51.481136 | orchestrator | 13:48:51.480 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=40b8d6d7-4545-465c-9849-c8d6aa81e9b4] 2025-05-14 13:48:51.488253 | orchestrator | 13:48:51.486 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-14 13:48:51.490429 | orchestrator | 13:48:51.490 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194] 2025-05-14 13:48:51.490561 | orchestrator | 13:48:51.490 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-14 13:48:51.496807 | orchestrator | 13:48:51.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-14 13:48:51.516900 | orchestrator | 13:48:51.516 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=2969d5d4-6b61-4174-959d-91757001b3d4] 2025-05-14 13:48:51.522143 | orchestrator | 13:48:51.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-14 13:48:51.528986 | orchestrator | 13:48:51.528 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=3506369f-dad3-424e-bb0e-001afa60c640] 2025-05-14 13:48:51.537142 | orchestrator | 13:48:51.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
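The node-side counterpart can be sketched from the same plan, again as a hedged reconstruction rather than the real module code. Attaching the VRRP rule to the node group is an assumption (the plan does not show which group it belongs to); protocol number 112 is VRRP, as used by keepalived between the nodes.

```hcl
# Node-side counterpart, same caveats as the management sketch above.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

# Protocol 112 is VRRP (keepalived); the target group is assumed, the plan
# only shows security_group_id as "(known after apply)".
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```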
2025-05-14 13:48:51.578421 | orchestrator | 13:48:51.578 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=01187494-c8f8-452b-8a71-7cb0e866cd7e] 2025-05-14 13:48:51.583105 | orchestrator | 13:48:51.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=60bd9cea-a91d-498b-bf8e-aa0954da2728] 2025-05-14 13:48:51.587114 | orchestrator | 13:48:51.586 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-14 13:48:51.599223 | orchestrator | 13:48:51.599 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-14 13:48:51.601614 | orchestrator | 13:48:51.601 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=0e7ca56e-ad5f-44b1-a048-99cbd42b26bb] 2025-05-14 13:48:51.602788 | orchestrator | 13:48:51.602 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=69098cc8d30a86b8405c0f41868a37122b45a2cc] 2025-05-14 13:48:51.608177 | orchestrator | 13:48:51.607 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-14 13:48:51.608568 | orchestrator | 13:48:51.608 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-14 13:48:51.614115 | orchestrator | 13:48:51.613 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=128a638324e79139af4ce50bd30c6946b2eecc4e] 2025-05-14 13:48:51.658204 | orchestrator | 13:48:51.657 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-14 13:48:51.854128 | orchestrator | 13:48:51.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=7e927c4f-d02c-4f8e-99e1-94b2128e93eb] 2025-05-14 13:48:56.880683 | orchestrator | 13:48:56.880 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-14 13:48:57.214431 | orchestrator | 13:48:57.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=19369105-bfec-4360-a374-d2f34f1753a0] 2025-05-14 13:48:57.455668 | orchestrator | 13:48:57.455 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=0aa98b62-6a13-476f-a3c3-cc4f2cdec5c4] 2025-05-14 13:48:57.465569 | orchestrator | 13:48:57.465 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-14 13:49:01.488338 | orchestrator | 13:49:01.488 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-14 13:49:01.493471 | orchestrator | 13:49:01.493 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-14 13:49:01.497656 | orchestrator | 13:49:01.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-14 13:49:01.523089 | orchestrator | 13:49:01.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-14 13:49:01.538366 | orchestrator | 13:49:01.538 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-14 13:49:01.589035 | orchestrator | 13:49:01.588 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-14 13:49:01.905300 | orchestrator | 13:49:01.904 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=b49ba43d-495c-4e97-94d5-24ddaafe687f] 2025-05-14 13:49:01.913630 | orchestrator | 13:49:01.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=1d5e4da1-02cf-44be-b84d-bc2f28a5f03a] 2025-05-14 13:49:01.921447 | orchestrator | 13:49:01.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=71c7cd21-af9a-43c7-833e-47c0c8f8b580] 2025-05-14 13:49:02.008464 | orchestrator | 13:49:02.007 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=b71e7922-b678-4f73-a76a-9c385d8067f2] 2025-05-14 13:49:02.016635 | orchestrator | 13:49:02.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=301a6f0d-44e1-4338-b25e-44cbfe2d08d8] 2025-05-14 13:49:02.021955 | orchestrator | 13:49:02.021 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=cc66d0de-7c12-4033-bf6f-0152012bc9df] 2025-05-14 13:49:04.507592 | orchestrator | 13:49:04.507 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=0d44b608-0b03-4c08-99de-cf1c2363596b] 2025-05-14 13:49:04.515072 | orchestrator | 13:49:04.514 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-14 13:49:04.522208 | orchestrator | 13:49:04.521 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-14 13:49:04.525347 | orchestrator | 13:49:04.525 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-14 13:49:04.672695 | orchestrator | 13:49:04.672 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=dab2aa44-b662-4ebe-93bf-f2dbf39ea794] 2025-05-14 13:49:04.689057 | orchestrator | 13:49:04.688 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-14 13:49:04.689494 | orchestrator | 13:49:04.689 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-14 13:49:04.691310 | orchestrator | 13:49:04.691 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=33f8fa23-c0e2-434d-969c-847112aef098] 2025-05-14 13:49:04.691766 | orchestrator | 13:49:04.691 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-14 13:49:04.696626 | orchestrator | 13:49:04.696 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-14 13:49:04.698045 | orchestrator | 13:49:04.697 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-14 13:49:04.698281 | orchestrator | 13:49:04.698 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-14 13:49:04.698971 | orchestrator | 13:49:04.698 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-14 13:49:04.704483 | orchestrator | 13:49:04.704 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-14 13:49:04.714064 | orchestrator | 13:49:04.713 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
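The router, management subnet and router interface that just finished creating can likewise be sketched from the plan values above. Only the literal values (external network ID, availability zone hint, CIDR, DNS servers, allocation pool) come from the log; the `network_id`/`subnet_id` references are assumptions, and in the real module the external network ID is presumably a variable.

```hcl
# Sketch of the network plumbing; literals from the plan output, references assumed.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```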
2025-05-14 13:49:04.910451 | orchestrator | 13:49:04.909 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=4f2ca797-02fc-4ddb-b2bd-13498acc9df8] 2025-05-14 13:49:04.923531 | orchestrator | 13:49:04.923 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-14 13:49:05.117052 | orchestrator | 13:49:05.116 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=d887b0a2-ba3a-4263-83eb-5dceace12598] 2025-05-14 13:49:05.124394 | orchestrator | 13:49:05.124 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-14 13:49:05.243786 | orchestrator | 13:49:05.243 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=7a75e9d4-0802-4577-b508-46a1fbbeed20] 2025-05-14 13:49:05.258483 | orchestrator | 13:49:05.258 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-14 13:49:05.370875 | orchestrator | 13:49:05.370 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=49efdfb4-16dd-4d0a-b7cf-878568f436c9] 2025-05-14 13:49:05.375603 | orchestrator | 13:49:05.375 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-14 13:49:05.449899 | orchestrator | 13:49:05.449 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=92d37e7d-7131-424a-a827-4906f5c6ff1d] 2025-05-14 13:49:05.455972 | orchestrator | 13:49:05.455 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-14 13:49:05.840376 | orchestrator | 13:49:05.839 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=5d278d0d-34fa-4498-9cec-f6563f65b099] 2025-05-14 13:49:05.849003 | orchestrator | 13:49:05.848 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-14 13:49:05.970525 | orchestrator | 13:49:05.970 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=831f5487-900d-43ca-860f-fb671e4c0f4a] 2025-05-14 13:49:05.979715 | orchestrator | 13:49:05.979 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
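The management ports being created here are not described in the plan excerpt, so the following is a rough, assumption-heavy sketch of how such ports are typically declared; only the count and the resource names come from the log, everything else is guessed.

```hcl
# Rough sketch only: port attributes are not visible in the plan excerpt above.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  security_group_ids = [
    openstack_networking_secgroup_v2.security_group_node.id,
  ]

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
  }
}
```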
2025-05-14 13:49:06.085993 | orchestrator | 13:49:06.085 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=4954e9c4-3330-4bf9-971d-7e41e69b2f8a] 2025-05-14 13:49:06.228551 | orchestrator | 13:49:06.228 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=1a5f7a5c-c215-4a1c-80f1-5db0e34c7f6b] 2025-05-14 13:49:10.738313 | orchestrator | 13:49:10.737 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=1dc0e347-5ef9-44ef-aa29-42701dc9cbdc] 2025-05-14 13:49:10.984361 | orchestrator | 13:49:10.984 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=3160603b-26c0-4eeb-8e26-f8b8b6688fa2] 2025-05-14 13:49:11.051194 | orchestrator | 13:49:11.050 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=6d6368c8-9838-42eb-a019-2bae1ad2f1ed] 2025-05-14 13:49:11.283616 | orchestrator | 13:49:11.283 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=6f77814e-7b9d-4a16-adfd-169051e872c3] 2025-05-14 13:49:11.311275 | orchestrator | 13:49:11.310 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=577b0f7e-50e4-4080-9e74-2b4f598d74f6] 2025-05-14 13:49:11.514658 | orchestrator | 13:49:11.514 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=06e94f6f-74eb-49d2-8833-0aef71bc1e1d] 2025-05-14 13:49:11.583642 | orchestrator | 13:49:11.583 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 7s [id=3121acda-0209-4c96-a4c7-f833102098e0] 2025-05-14 13:49:11.789415 | orchestrator | 13:49:11.788 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=89747cff-f919-4104-ab59-eccc5d760a64] 2025-05-14 13:49:11.818900 | orchestrator | 13:49:11.814 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-14 13:49:11.825980 | orchestrator | 13:49:11.825 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-14 13:49:11.833092 | orchestrator | 13:49:11.832 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-14 13:49:11.840993 | orchestrator | 13:49:11.839 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-14 13:49:11.841259 | orchestrator | 13:49:11.841 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-14 13:49:11.843042 | orchestrator | 13:49:11.842 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-14 13:49:11.845962 | orchestrator | 13:49:11.845 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-14 13:49:19.421432 | orchestrator | 13:49:19.421 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=c008f4f4-6fab-4a51-b333-109d97873ded] 2025-05-14 13:49:19.428247 | orchestrator | 13:49:19.427 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-14 13:49:19.439121 | orchestrator | 13:49:19.439 STDOUT terraform: local_file.inventory: Creating... 2025-05-14 13:49:19.439898 | orchestrator | 13:49:19.439 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
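For the manager's floating IP and its association with the management port, a hedged sketch based on the resource addresses in the log; the pool variable and the local file path are assumptions.

```hcl
# Floating IP handling for the manager; variable and file name are assumed.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = var.public_network   # assumed variable; the pool name is not in the log
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "local_file" "MANAGER_ADDRESS" {
  filename = ".MANAGER_ADDRESS"   # assumed path; the "Fetch manager address" task below presumably reads it
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
}
```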
2025-05-14 13:49:19.443079 | orchestrator | 13:49:19.442 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=fc98aac5bfa68f03dedc9686e6cdb9dd9c379ecd] 2025-05-14 13:49:19.447183 | orchestrator | 13:49:19.447 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1e37200253f63174f1968df0de9ee08064530f91] 2025-05-14 13:49:20.017503 | orchestrator | 13:49:20.017 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c008f4f4-6fab-4a51-b333-109d97873ded] 2025-05-14 13:49:21.827092 | orchestrator | 13:49:21.826 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-14 13:49:21.838435 | orchestrator | 13:49:21.838 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-14 13:49:21.841717 | orchestrator | 13:49:21.841 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-14 13:49:21.842898 | orchestrator | 13:49:21.842 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-14 13:49:21.847332 | orchestrator | 13:49:21.847 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-14 13:49:21.847468 | orchestrator | 13:49:21.847 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-14 13:49:31.827177 | orchestrator | 13:49:31.826 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-14 13:49:31.839414 | orchestrator | 13:49:31.839 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-14 13:49:31.842772 | orchestrator | 13:49:31.842 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-14 13:49:31.843865 | orchestrator | 13:49:31.843 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-14 13:49:31.848145 | orchestrator | 13:49:31.847 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-14 13:49:31.848297 | orchestrator | 13:49:31.848 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-14 13:49:41.828142 | orchestrator | 13:49:41.827 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-14 13:49:41.840048 | orchestrator | 13:49:41.839 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-14 13:49:41.843236 | orchestrator | 13:49:41.843 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-05-14 13:49:41.844322 | orchestrator | 13:49:41.844 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-14 13:49:41.848578 | orchestrator | 13:49:41.848 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-14 13:49:41.848699 | orchestrator | 13:49:41.848 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-05-14 13:49:42.192439 | orchestrator | 13:49:42.192 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=df7cd4d3-b4f0-47ca-b86e-f9510654be70] 2025-05-14 13:49:42.411451 | orchestrator | 13:49:42.411 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=66687d89-c1c3-4116-8d2b-7b97605e54a3] 2025-05-14 13:49:42.481468 | orchestrator | 13:49:42.481 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=01795db3-31ad-4ec1-a4c7-8631a487eae2] 2025-05-14 13:49:42.553037 | orchestrator | 13:49:42.552 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=4df9e079-7405-4b67-88bd-22c51b7018c6] 2025-05-14 13:49:51.840819 | orchestrator | 13:49:51.840 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-05-14 13:49:51.849344 | orchestrator | 13:49:51.849 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-05-14 13:49:53.225473 | orchestrator | 13:49:53.225 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=10b87a26-5eb6-472b-8db0-f8ad3e2fdb68] 2025-05-14 13:49:53.546543 | orchestrator | 13:49:53.546 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=9ebbe42c-d42a-4387-adbe-841aa1349460] 2025-05-14 13:49:53.571572 | orchestrator | 13:49:53.571 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-14 13:49:53.583619 | orchestrator | 13:49:53.583 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-14 13:49:53.584970 | orchestrator | 13:49:53.584 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5956640382900154127] 2025-05-14 13:49:53.585025 | orchestrator | 13:49:53.584 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-14 13:49:53.585418 | orchestrator | 13:49:53.585 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-14 13:49:53.585495 | orchestrator | 13:49:53.585 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-14 13:49:53.586123 | orchestrator | 13:49:53.585 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-14 13:49:53.586157 | orchestrator | 13:49:53.586 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-14 13:49:53.593292 | orchestrator | 13:49:53.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-14 13:49:53.596641 | orchestrator | 13:49:53.596 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-14 13:49:53.610217 | orchestrator | 13:49:53.610 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-14 13:49:53.617201 | orchestrator | 13:49:53.617 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
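The volume attachments created next pair the nine extra volumes with the three storage-capable nodes: the attachment IDs reported in the following lines (instance_id/volume_id) show volumes 0-8 rotating over node_server[3], [4] and [5]. A sketch of how that could be expressed; the modulo arithmetic is inferred from those IDs, not read from the module.

```hcl
# Index arithmetic inferred from the observed attachment IDs, not from the
# actual testbed module.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```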
2025-05-14 13:49:58.926193 | orchestrator | 13:49:58.925 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=4df9e079-7405-4b67-88bd-22c51b7018c6/7e927c4f-d02c-4f8e-99e1-94b2128e93eb] 2025-05-14 13:49:58.935334 | orchestrator | 13:49:58.935 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=10b87a26-5eb6-472b-8db0-f8ad3e2fdb68/ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194] 2025-05-14 13:49:58.966352 | orchestrator | 13:49:58.965 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=66687d89-c1c3-4116-8d2b-7b97605e54a3/40b8d6d7-4545-465c-9849-c8d6aa81e9b4] 2025-05-14 13:49:58.982172 | orchestrator | 13:49:58.981 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=4df9e079-7405-4b67-88bd-22c51b7018c6/0e7ca56e-ad5f-44b1-a048-99cbd42b26bb] 2025-05-14 13:49:59.012451 | orchestrator | 13:49:59.012 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=10b87a26-5eb6-472b-8db0-f8ad3e2fdb68/60bd9cea-a91d-498b-bf8e-aa0954da2728] 2025-05-14 13:49:59.021506 | orchestrator | 13:49:59.021 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=66687d89-c1c3-4116-8d2b-7b97605e54a3/01187494-c8f8-452b-8a71-7cb0e866cd7e] 2025-05-14 13:49:59.044839 | orchestrator | 13:49:59.044 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=4df9e079-7405-4b67-88bd-22c51b7018c6/3506369f-dad3-424e-bb0e-001afa60c640] 2025-05-14 13:49:59.060804 | orchestrator | 13:49:59.060 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=10b87a26-5eb6-472b-8db0-f8ad3e2fdb68/1515eacf-7c8c-4c61-b2e2-7b383c3e44c1] 2025-05-14 13:49:59.080844 | orchestrator | 13:49:59.080 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=66687d89-c1c3-4116-8d2b-7b97605e54a3/2969d5d4-6b61-4174-959d-91757001b3d4] 2025-05-14 13:50:03.618721 | orchestrator | 13:50:03.618 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-14 13:50:13.623946 | orchestrator | 13:50:13.623 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-14 13:50:14.336160 | orchestrator | 13:50:14.335 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=d90764ec-4eb0-4c4f-98db-a45363d13ab4] 2025-05-14 13:50:14.365287 | orchestrator | 13:50:14.364 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
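The two outputs reported as `(sensitive value)` in the plan are most likely defined along these lines; the value expressions are assumptions, and marking them sensitive is why terraform does not print their values in the output listing below.

```hcl
# Probable shape of the sensitive outputs; value sources are assumed.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key   # assumed source
  sensitive = true
}
```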
2025-05-14 13:50:14.365434 | orchestrator | 13:50:14.364 STDOUT terraform: Outputs: 2025-05-14 13:50:14.365465 | orchestrator | 13:50:14.365 STDOUT terraform: manager_address = 2025-05-14 13:50:14.365481 | orchestrator | 13:50:14.365 STDOUT terraform: private_key = 2025-05-14 13:50:14.857422 | orchestrator | ok: Runtime: 0:01:46.912693 2025-05-14 13:50:14.896205 | 2025-05-14 13:50:14.896355 | TASK [Fetch manager address] 2025-05-14 13:50:15.371221 | orchestrator | ok 2025-05-14 13:50:15.380991 | 2025-05-14 13:50:15.381125 | TASK [Set manager_host address] 2025-05-14 13:50:15.462147 | orchestrator | ok 2025-05-14 13:50:15.472506 | 2025-05-14 13:50:15.472723 | LOOP [Update ansible collections] 2025-05-14 13:50:17.783992 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-14 13:50:17.784443 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 13:50:17.784516 | orchestrator | Starting galaxy collection install process 2025-05-14 13:50:17.784581 | orchestrator | Process install dependency map 2025-05-14 13:50:17.784622 | orchestrator | Starting collection install process 2025-05-14 13:50:17.784658 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-05-14 13:50:17.784717 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-05-14 13:50:17.784774 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-14 13:50:17.784865 | orchestrator | ok: Item: commons Runtime: 0:00:01.998410 2025-05-14 13:50:18.521394 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 13:50:18.521536 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-14 13:50:18.521582 | orchestrator | Starting galaxy collection install process 2025-05-14 13:50:18.521606 | orchestrator | Process install dependency map 2025-05-14 13:50:18.521627 | orchestrator | Starting collection install process 2025-05-14 13:50:18.521647 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-05-14 13:50:18.521668 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-05-14 13:50:18.521688 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-14 13:50:18.521723 | orchestrator | ok: Item: services Runtime: 0:00:00.506602 2025-05-14 13:50:18.540902 | 2025-05-14 13:50:18.541056 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-14 13:50:29.078646 | orchestrator | ok 2025-05-14 13:50:29.089672 | 2025-05-14 13:50:29.089835 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-14 13:51:29.139314 | orchestrator | ok 2025-05-14 13:51:29.150006 | 2025-05-14 13:51:29.150155 | TASK [Fetch manager ssh hostkey] 2025-05-14 13:51:30.728729 | orchestrator | Output suppressed because no_log was given 2025-05-14 13:51:30.745811 | 2025-05-14 13:51:30.746001 | TASK [Get ssh keypair from terraform environment] 2025-05-14 13:51:31.284162 | orchestrator | ok: Runtime: 0:00:00.008436 2025-05-14 13:51:31.305202 | 2025-05-14 13:51:31.305455 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-14 13:51:31.353417 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-14 13:51:31.362744 | 2025-05-14 13:51:31.362908 | TASK [Run manager part 0] 2025-05-14 13:51:32.367763 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 13:51:32.437735 | orchestrator | 2025-05-14 13:51:32.437830 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-14 13:51:32.437848 | orchestrator | 2025-05-14 13:51:32.437878 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-14 13:51:34.100337 | orchestrator | ok: [testbed-manager] 2025-05-14 13:51:34.100439 | orchestrator | 2025-05-14 13:51:34.100500 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-14 13:51:34.100523 | orchestrator | 2025-05-14 13:51:34.100545 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 13:51:35.973041 | orchestrator | ok: [testbed-manager] 2025-05-14 13:51:35.973079 | orchestrator | 2025-05-14 13:51:35.973085 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-14 13:51:36.659864 | orchestrator | ok: [testbed-manager] 2025-05-14 13:51:36.659932 | orchestrator | 2025-05-14 13:51:36.659949 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-14 13:51:36.712188 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.712223 | orchestrator | 2025-05-14 13:51:36.712231 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-14 13:51:36.741276 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.741311 | orchestrator | 2025-05-14 13:51:36.741317 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-14 13:51:36.765129 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.765160 | orchestrator | 2025-05-14 13:51:36.765165 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-14 13:51:36.786850 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.786882 | orchestrator | 2025-05-14 13:51:36.786887 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-14 13:51:36.809986 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.810036 | orchestrator | 2025-05-14 13:51:36.810043 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-14 13:51:36.833802 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.833832 | orchestrator | 2025-05-14 13:51:36.833838 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-14 13:51:36.860202 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:51:36.860236 | orchestrator | 2025-05-14 13:51:36.860243 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-14 13:51:37.643911 | orchestrator | changed: [testbed-manager] 2025-05-14 13:51:37.643961 | orchestrator | 2025-05-14 13:51:37.643970 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-14 13:54:39.832531 | orchestrator | changed: [testbed-manager] 2025-05-14 13:54:39.832611 | orchestrator | 2025-05-14 13:54:39.832630 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-14 13:55:52.567406 | orchestrator | changed: [testbed-manager] 2025-05-14 13:55:52.568119 | orchestrator | 2025-05-14 13:55:52.568137 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-14 13:56:11.291868 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:11.291948 | orchestrator | 2025-05-14 13:56:11.291967 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-14 13:56:19.597666 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:19.597732 | orchestrator | 2025-05-14 13:56:19.597747 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-14 13:56:19.645764 | orchestrator | ok: [testbed-manager] 2025-05-14 13:56:19.645836 | orchestrator | 2025-05-14 13:56:19.645850 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-14 13:56:20.421086 | orchestrator | ok: [testbed-manager] 2025-05-14 13:56:20.421176 | orchestrator | 2025-05-14 13:56:20.421194 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-14 13:56:21.141768 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:21.141856 | orchestrator | 2025-05-14 13:56:21.141873 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-14 13:56:27.465760 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:27.465876 | orchestrator | 2025-05-14 13:56:27.465933 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-14 13:56:33.322060 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:33.322171 | orchestrator | 2025-05-14 13:56:33.322203 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-14 13:56:35.885960 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:35.886080 | orchestrator | 2025-05-14 13:56:35.886100 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-14 13:56:37.639620 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:37.639707 | orchestrator | 2025-05-14 13:56:37.639721 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-14 13:56:38.763661 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-14 13:56:38.763753 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-14 13:56:38.763767 | orchestrator | 2025-05-14 13:56:38.763780 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-14 13:56:38.808524 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-14 13:56:38.808598 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-14 13:56:38.808611 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-14 13:56:38.808622 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-14 13:56:43.484364 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-14 13:56:43.484461 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-14 13:56:43.484475 | orchestrator | 2025-05-14 13:56:43.484488 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-14 13:56:44.170492 | orchestrator | changed: [testbed-manager] 2025-05-14 13:56:44.170601 | orchestrator | 2025-05-14 13:56:44.170614 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-14 13:57:02.912755 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-14 13:57:02.912858 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-14 13:57:02.912876 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-14 13:57:02.912888 | orchestrator | 2025-05-14 13:57:02.912901 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-14 13:57:05.305424 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-14 13:57:05.305539 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-14 13:57:05.305554 | orchestrator | 2025-05-14 13:57:05.305567 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-14 13:57:05.305579 | orchestrator | 2025-05-14 13:57:05.305590 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 13:57:06.659327 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:06.659411 | orchestrator | 2025-05-14 13:57:06.659427 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-14 13:57:06.704077 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:06.704143 | orchestrator | 2025-05-14 13:57:06.704157 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-14 13:57:06.776109 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:06.776186 | orchestrator | 2025-05-14 13:57:06.776200 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-14 13:57:07.521400 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:07.521612 | orchestrator | 2025-05-14 13:57:07.521633 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-14 13:57:08.236124 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:08.236906 | orchestrator | 2025-05-14 13:57:08.236934 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-14 13:57:09.610531 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-14 13:57:09.610599 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-14 13:57:09.610614 | orchestrator | 2025-05-14 13:57:09.610640 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-14 13:57:10.960879 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:10.960965 | orchestrator | 2025-05-14 13:57:10.960982 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-14 13:57:12.686401 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 
13:57:12.686478 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-14 13:57:12.686490 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-14 13:57:12.686522 | orchestrator | 2025-05-14 13:57:12.686533 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-14 13:57:13.244070 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:13.244841 | orchestrator | 2025-05-14 13:57:13.244872 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-14 13:57:13.358347 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:13.358438 | orchestrator | 2025-05-14 13:57:13.358456 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-14 13:57:14.227446 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 13:57:14.227601 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:14.227627 | orchestrator | 2025-05-14 13:57:14.227647 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-14 13:57:14.265061 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:14.265147 | orchestrator | 2025-05-14 13:57:14.265162 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-14 13:57:14.303791 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:14.303870 | orchestrator | 2025-05-14 13:57:14.303885 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-14 13:57:14.337295 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:14.337374 | orchestrator | 2025-05-14 13:57:14.337388 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-14 13:57:14.382584 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:14.382669 | orchestrator | 2025-05-14 13:57:14.382684 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-14 13:57:15.126337 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:15.126422 | orchestrator | 2025-05-14 13:57:15.126437 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-14 13:57:15.126449 | orchestrator | 2025-05-14 13:57:15.126462 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 13:57:16.504204 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:16.504273 | orchestrator | 2025-05-14 13:57:16.504285 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-14 13:57:17.453997 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:17.454053 | orchestrator | 2025-05-14 13:57:17.454059 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 13:57:17.454064 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-14 13:57:17.454069 | orchestrator | 2025-05-14 13:57:17.621031 | orchestrator | ok: Runtime: 0:05:45.907775 2025-05-14 13:57:17.632995 | 2025-05-14 13:57:17.633126 | TASK [Point out that the log in on the manager is now possible] 2025-05-14 13:57:17.677974 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
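Editor's note: the osism.commons.operator tasks above amount to roughly the following shell steps. This is an illustrative sketch only: the role drives everything through Ansible modules, the user name 'dragon' is inferred from the /home/dragon paths that appear later in this log, and the sudoers content is an assumption, not taken from the role.

  groupadd dragon                                                 # "Create operator group"
  useradd -m -g dragon -s /bin/bash dragon                        # "Create user"
  usermod -aG adm,sudo dragon                                     # "Add user to additional groups"
  echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon   # assumed sudoers content
  printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc
  install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh        # "Create .ssh directory"
  passwd -l dragon                                                # "Unset & lock password"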
2025-05-14 13:57:17.687263 | 2025-05-14 13:57:17.687387 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-14 13:57:17.721868 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-14 13:57:17.730236 | 2025-05-14 13:57:17.730351 | TASK [Run manager part 1 + 2] 2025-05-14 13:57:18.631312 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 13:57:18.689281 | orchestrator | 2025-05-14 13:57:18.689369 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-14 13:57:18.689388 | orchestrator | 2025-05-14 13:57:18.689418 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 13:57:21.164932 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:21.165022 | orchestrator | 2025-05-14 13:57:21.165088 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-14 13:57:21.201256 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:21.201334 | orchestrator | 2025-05-14 13:57:21.201352 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-14 13:57:21.243611 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:21.280124 | orchestrator | 2025-05-14 13:57:21.280181 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 13:57:21.302260 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:21.310212 | orchestrator | 2025-05-14 13:57:21.310246 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 13:57:21.380352 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:21.380412 | orchestrator | 2025-05-14 13:57:21.380424 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 13:57:21.451120 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:21.451221 | orchestrator | 2025-05-14 13:57:21.451239 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 13:57:21.505304 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-14 13:57:21.505384 | orchestrator | 2025-05-14 13:57:21.505400 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 13:57:22.236318 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:22.236400 | orchestrator | 2025-05-14 13:57:22.236419 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 13:57:22.287640 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:22.287694 | orchestrator | 2025-05-14 13:57:22.287704 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 13:57:23.644675 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:23.644773 | orchestrator | 2025-05-14 13:57:23.644794 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 13:57:24.232081 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:24.232170 | orchestrator | 2025-05-14 13:57:24.232187 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 13:57:25.398267 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:25.398409 | orchestrator | 2025-05-14 13:57:25.398429 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 13:57:38.705125 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:38.705218 | orchestrator | 2025-05-14 13:57:38.705234 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-14 13:57:39.368385 | orchestrator | ok: [testbed-manager] 2025-05-14 13:57:39.369108 | orchestrator | 2025-05-14 13:57:39.369144 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-14 13:57:39.425815 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:39.425895 | orchestrator | 2025-05-14 13:57:39.425909 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-14 13:57:40.415117 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:40.415196 | orchestrator | 2025-05-14 13:57:40.415211 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-14 13:57:41.414183 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:41.414271 | orchestrator | 2025-05-14 13:57:41.414288 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-14 13:57:41.973280 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:41.973348 | orchestrator | 2025-05-14 13:57:41.973363 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-14 13:57:42.019587 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-14 13:57:42.019675 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-14 13:57:42.019690 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-14 13:57:42.019702 | orchestrator | deprecation_warnings=False in ansible.cfg. 
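Editor's note: the osism.commons.repository tasks above switch the manager to a deb822-style ubuntu.sources file on Ubuntu 24.04. A hedged shell outline of the same steps; the actual file contents come from the role's templates, and the destination of the 99osism apt configuration is an assumption:

  install -d /etc/apt/sources.list.d                        # "Create /etc/apt/sources.list.d directory"
  cp 99osism /etc/apt/apt.conf.d/99osism                    # assumed target of the 99osism apt configuration
  rm -f /etc/apt/sources.list                               # "Remove sources.list file"
  cp ubuntu.sources /etc/apt/sources.list.d/ubuntu.sources  # deb822 sources rendered by the role
  apt-get update                                            # "Update package cache"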
2025-05-14 13:57:44.801891 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:44.802058 | orchestrator | 2025-05-14 13:57:44.802078 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-14 13:57:53.704898 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-14 13:57:53.704993 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-14 13:57:53.705012 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-14 13:57:53.705024 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-14 13:57:53.705036 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-14 13:57:53.705047 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-14 13:57:53.705058 | orchestrator | 2025-05-14 13:57:53.705071 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-14 13:57:54.752425 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:54.752558 | orchestrator | 2025-05-14 13:57:54.752578 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-14 13:57:54.795320 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:54.795384 | orchestrator | 2025-05-14 13:57:54.795394 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-14 13:57:57.807993 | orchestrator | changed: [testbed-manager] 2025-05-14 13:57:57.808884 | orchestrator | 2025-05-14 13:57:57.808908 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-14 13:57:57.851457 | orchestrator | skipping: [testbed-manager] 2025-05-14 13:57:57.851531 | orchestrator | 2025-05-14 13:57:57.851543 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-14 13:59:27.989086 | orchestrator | changed: [testbed-manager] 2025-05-14 13:59:27.989250 | orchestrator | 2025-05-14 13:59:27.989276 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 13:59:29.574111 | orchestrator | ok: [testbed-manager] 2025-05-14 13:59:29.574198 | orchestrator | 2025-05-14 13:59:29.574213 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 13:59:29.574227 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-14 13:59:29.574238 | orchestrator | 2025-05-14 13:59:29.857242 | orchestrator | ok: Runtime: 0:02:11.636299 2025-05-14 13:59:29.868639 | 2025-05-14 13:59:29.868769 | TASK [Reboot manager] 2025-05-14 13:59:31.410600 | orchestrator | ok: Runtime: 0:00:00.934317 2025-05-14 13:59:31.427266 | 2025-05-14 13:59:31.427426 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-14 13:59:46.582003 | orchestrator | ok 2025-05-14 13:59:46.592054 | 2025-05-14 13:59:46.592203 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-14 14:00:46.637193 | orchestrator | ok 2025-05-14 14:00:46.649255 | 2025-05-14 14:00:46.649399 | TASK [Deploy manager + bootstrap nodes] 2025-05-14 14:00:49.061320 | orchestrator | 2025-05-14 14:00:49.061447 | orchestrator | # DEPLOY MANAGER 2025-05-14 14:00:49.061458 | orchestrator | 2025-05-14 14:00:49.061463 | orchestrator | + set -e 2025-05-14 14:00:49.061468 | orchestrator | + echo 2025-05-14 14:00:49.061474 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-14 14:00:49.061479 | orchestrator | + echo 2025-05-14 14:00:49.061536 | orchestrator | + cat /opt/manager-vars.sh 2025-05-14 14:00:49.064469 | orchestrator | export NUMBER_OF_NODES=6 2025-05-14 14:00:49.064486 | orchestrator | 2025-05-14 14:00:49.064518 | orchestrator | export CEPH_VERSION=reef 2025-05-14 14:00:49.064524 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-14 14:00:49.064529 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-14 14:00:49.064539 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-14 14:00:49.064542 | orchestrator | 2025-05-14 14:00:49.064550 | orchestrator | export ARA=false 2025-05-14 14:00:49.064554 | orchestrator | export TEMPEST=false 2025-05-14 14:00:49.064561 | orchestrator | export IS_ZUUL=true 2025-05-14 14:00:49.064565 | orchestrator | 2025-05-14 14:00:49.064572 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:00:49.064578 | orchestrator | export EXTERNAL_API=false 2025-05-14 14:00:49.064582 | orchestrator | 2025-05-14 14:00:49.064590 | orchestrator | export IMAGE_USER=ubuntu 2025-05-14 14:00:49.064594 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-14 14:00:49.064598 | orchestrator | 2025-05-14 14:00:49.064604 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-14 14:00:49.064753 | orchestrator | 2025-05-14 14:00:49.064763 | orchestrator | + echo 2025-05-14 14:00:49.064768 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 14:00:49.065714 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 14:00:49.065724 | orchestrator | ++ INTERACTIVE=false 2025-05-14 14:00:49.065728 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 14:00:49.065731 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 14:00:49.065917 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 14:00:49.065924 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 14:00:49.065928 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 14:00:49.065934 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 14:00:49.065938 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 14:00:49.065988 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 14:00:49.065994 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 14:00:49.065998 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 14:00:49.066002 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 14:00:49.066006 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 14:00:49.066010 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 14:00:49.066050 | orchestrator | ++ export ARA=false 2025-05-14 14:00:49.066054 | orchestrator | ++ ARA=false 2025-05-14 14:00:49.066065 | orchestrator | ++ export TEMPEST=false 2025-05-14 14:00:49.066069 | orchestrator | ++ TEMPEST=false 2025-05-14 14:00:49.066087 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 14:00:49.066091 | orchestrator | ++ IS_ZUUL=true 2025-05-14 14:00:49.066095 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:00:49.066099 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:00:49.066105 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 14:00:49.066109 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 14:00:49.066113 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 14:00:49.066116 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 14:00:49.066120 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 14:00:49.066124 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 
14:00:49.066128 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 14:00:49.066132 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 14:00:49.066180 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-14 14:00:49.120410 | orchestrator | + docker version 2025-05-14 14:00:49.375778 | orchestrator | Client: Docker Engine - Community 2025-05-14 14:00:49.375855 | orchestrator | Version: 26.1.4 2025-05-14 14:00:49.375864 | orchestrator | API version: 1.45 2025-05-14 14:00:49.375869 | orchestrator | Go version: go1.21.11 2025-05-14 14:00:49.375873 | orchestrator | Git commit: 5650f9b 2025-05-14 14:00:49.375877 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-14 14:00:49.375883 | orchestrator | OS/Arch: linux/amd64 2025-05-14 14:00:49.375887 | orchestrator | Context: default 2025-05-14 14:00:49.375891 | orchestrator | 2025-05-14 14:00:49.375895 | orchestrator | Server: Docker Engine - Community 2025-05-14 14:00:49.375899 | orchestrator | Engine: 2025-05-14 14:00:49.375903 | orchestrator | Version: 26.1.4 2025-05-14 14:00:49.375907 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-14 14:00:49.375911 | orchestrator | Go version: go1.21.11 2025-05-14 14:00:49.375915 | orchestrator | Git commit: de5c9cf 2025-05-14 14:00:49.375941 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-14 14:00:49.375945 | orchestrator | OS/Arch: linux/amd64 2025-05-14 14:00:49.375949 | orchestrator | Experimental: false 2025-05-14 14:00:49.375953 | orchestrator | containerd: 2025-05-14 14:00:49.375965 | orchestrator | Version: 1.7.27 2025-05-14 14:00:49.375969 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-14 14:00:49.375973 | orchestrator | runc: 2025-05-14 14:00:49.375977 | orchestrator | Version: 1.2.5 2025-05-14 14:00:49.375981 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-14 14:00:49.375985 | orchestrator | docker-init: 2025-05-14 14:00:49.375989 | orchestrator | Version: 0.19.0 2025-05-14 14:00:49.375993 | orchestrator | GitCommit: de40ad0 2025-05-14 14:00:49.378919 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-14 14:00:49.387645 | orchestrator | + set -e 2025-05-14 14:00:49.387658 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 14:00:49.387662 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 14:00:49.387666 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 14:00:49.387670 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 14:00:49.387674 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 14:00:49.387678 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 14:00:49.387682 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 14:00:49.387686 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 14:00:49.387690 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 14:00:49.387718 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 14:00:49.387723 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 14:00:49.387728 | orchestrator | ++ export ARA=false 2025-05-14 14:00:49.387732 | orchestrator | ++ ARA=false 2025-05-14 14:00:49.387735 | orchestrator | ++ export TEMPEST=false 2025-05-14 14:00:49.387739 | orchestrator | ++ TEMPEST=false 2025-05-14 14:00:49.387743 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 14:00:49.387747 | orchestrator | ++ IS_ZUUL=true 2025-05-14 14:00:49.387750 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:00:49.387755 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:00:49.387759 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 14:00:49.387762 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 14:00:49.387766 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 14:00:49.387770 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 14:00:49.387774 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 14:00:49.387778 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 14:00:49.387781 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 14:00:49.387785 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 14:00:49.387789 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 14:00:49.387793 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 14:00:49.387796 | orchestrator | ++ INTERACTIVE=false 2025-05-14 14:00:49.387800 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 14:00:49.387804 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 14:00:49.388111 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 14:00:49.388116 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-14 14:00:49.394939 | orchestrator | + set -e 2025-05-14 14:00:49.394949 | orchestrator | + VERSION=8.1.0 2025-05-14 14:00:49.394955 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-14 14:00:49.401886 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 14:00:49.401905 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-14 14:00:49.406822 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-14 14:00:49.411551 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-14 14:00:49.419353 | orchestrator | /opt/configuration ~ 2025-05-14 14:00:49.419368 | orchestrator | + set -e 2025-05-14 14:00:49.419373 | orchestrator | + pushd /opt/configuration 2025-05-14 14:00:49.419377 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 14:00:49.420691 | orchestrator | + source /opt/venv/bin/activate 2025-05-14 14:00:49.422531 | orchestrator | ++ deactivate nondestructive 2025-05-14 14:00:49.422540 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:49.422544 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:49.422548 | orchestrator | ++ hash -r 2025-05-14 14:00:49.422552 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:49.422556 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-14 14:00:49.422560 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-14 14:00:49.422565 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-14 14:00:49.422586 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-14 14:00:49.422590 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-14 14:00:49.422595 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-14 14:00:49.422599 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-14 14:00:49.422603 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 14:00:49.422607 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 14:00:49.422611 | orchestrator | ++ export PATH 2025-05-14 14:00:49.422615 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:49.422619 | orchestrator | ++ '[' -z '' ']' 2025-05-14 14:00:49.422622 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-14 14:00:49.422626 | orchestrator | ++ PS1='(venv) ' 2025-05-14 14:00:49.422630 | orchestrator | ++ export PS1 2025-05-14 14:00:49.422634 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-14 14:00:49.422638 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-14 14:00:49.422641 | orchestrator | ++ hash -r 2025-05-14 14:00:49.422656 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-14 14:00:50.506198 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-14 14:00:50.506765 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-14 14:00:50.508172 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-14 14:00:50.509401 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-14 14:00:50.510535 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-14 14:00:50.520642 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.0) 2025-05-14 14:00:50.522128 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-14 14:00:50.523155 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-14 14:00:50.524438 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-14 14:00:50.553961 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-14 14:00:50.555379 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-14 14:00:50.556935 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-14 14:00:50.558434 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-14 14:00:50.562419 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-14 14:00:50.767950 | orchestrator | ++ which gilt 2025-05-14 14:00:50.771367 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-14 14:00:50.771381 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-14 14:00:50.971093 | orchestrator | osism.cfg-generics: 2025-05-14 14:00:50.971169 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-14 14:00:52.367364 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-14 14:00:52.367484 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-14 14:00:52.367785 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-14 14:00:52.367854 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-14 14:00:53.269506 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-14 14:00:53.280144 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-14 14:00:53.584565 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-14 14:00:53.639602 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 14:00:53.639655 | orchestrator | + deactivate 2025-05-14 14:00:53.639661 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-14 14:00:53.639668 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 14:00:53.639672 | orchestrator | + export PATH 2025-05-14 14:00:53.639676 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-14 14:00:53.639687 | orchestrator | ~ 2025-05-14 14:00:53.639692 | orchestrator | + '[' -n '' ']' 2025-05-14 14:00:53.639696 | orchestrator | + hash -r 2025-05-14 14:00:53.639700 | orchestrator | + '[' -n '' ']' 2025-05-14 14:00:53.639703 | orchestrator | + unset VIRTUAL_ENV 2025-05-14 14:00:53.639707 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-14 14:00:53.639711 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-14 14:00:53.639715 | orchestrator | + unset -f deactivate 2025-05-14 14:00:53.639719 | orchestrator | + popd 2025-05-14 14:00:53.642192 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-14 14:00:53.642276 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-14 14:00:53.642619 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 14:00:53.701714 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 14:00:53.701772 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-14 14:00:53.701785 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-14 14:00:53.740400 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 14:00:53.740469 | orchestrator | + source /opt/venv/bin/activate 2025-05-14 14:00:53.740520 | orchestrator | ++ deactivate nondestructive 2025-05-14 14:00:53.740672 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:53.740680 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:53.740684 | orchestrator | ++ hash -r 2025-05-14 14:00:53.740782 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:53.740789 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-14 14:00:53.740910 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-14 14:00:53.740918 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-14 14:00:53.741106 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-14 14:00:53.741114 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-14 14:00:53.741118 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-14 14:00:53.741122 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-14 14:00:53.741201 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 14:00:53.741405 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 14:00:53.741412 | orchestrator | ++ export PATH 2025-05-14 14:00:53.741417 | orchestrator | ++ '[' -n '' ']' 2025-05-14 14:00:53.741549 | orchestrator | ++ '[' -z '' ']' 2025-05-14 14:00:53.741557 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-14 14:00:53.741561 | orchestrator | ++ PS1='(venv) ' 2025-05-14 14:00:53.741683 | orchestrator | ++ export PS1 2025-05-14 14:00:53.741689 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-14 14:00:53.741693 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-14 14:00:53.741696 | orchestrator | ++ hash -r 2025-05-14 14:00:53.741842 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-14 14:00:54.865201 | orchestrator | 2025-05-14 14:00:54.865341 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-14 14:00:54.865359 | orchestrator | 2025-05-14 14:00:54.865372 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 14:00:55.403314 | orchestrator | ok: [testbed-manager] 2025-05-14 14:00:55.403412 | orchestrator | 2025-05-14 14:00:55.403427 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-14 14:00:56.424826 | orchestrator | changed: [testbed-manager] 2025-05-14 14:00:56.424928 | orchestrator | 2025-05-14 14:00:56.424944 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-14 14:00:56.424956 | orchestrator | 2025-05-14 
14:00:56.424968 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 14:00:58.664718 | orchestrator | ok: [testbed-manager] 2025-05-14 14:00:58.664840 | orchestrator | 2025-05-14 14:00:58.664856 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-14 14:01:03.587892 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-14 14:01:03.587998 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-14 14:01:03.588013 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-14 14:01:03.588025 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-14 14:01:03.588036 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-14 14:01:03.588051 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-14 14:01:03.588064 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-14 14:01:03.588077 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-14 14:01:03.588089 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-14 14:01:03.588100 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-14 14:01:03.588111 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-14 14:01:03.588123 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-14 14:01:03.588134 | orchestrator | 2025-05-14 14:01:03.588146 | orchestrator | TASK [Check status] ************************************************************ 2025-05-14 14:02:19.490738 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 14:02:19.490860 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-14 14:02:19.490876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-14 14:02:19.490888 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-14 14:02:19.490912 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j895402486697.1589', 'results_file': '/home/dragon/.ansible_async/j895402486697.1589', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.490932 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j623388814427.1614', 'results_file': '/home/dragon/.ansible_async/j623388814427.1614', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.490954 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 14:02:19.490971 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
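Editor's note: the "Pull images" / "Check status" pair above is the usual Ansible async pattern: every docker pull is started as a background job and the playbook then polls the job results, which is all the FAILED - RETRYING lines record. A plain-shell approximation of the same idea, shown with a subset of the images listed above:

  # start the pulls in the background ...
  for image in \
      registry.osism.tech/osism/osism-ansible:8.1.0 \
      registry.osism.tech/osism/kolla-ansible:8.1.0 \
      registry.osism.tech/osism/ceph-ansible:8.1.0; do
    docker pull --quiet "$image" &
  done
  # ... and block until every pull has finished, analogous to polling the async job status
  wait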
2025-05-14 14:02:19.490983 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j102117805146.1639', 'results_file': '/home/dragon/.ansible_async/j102117805146.1639', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.490995 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j554835465458.1671', 'results_file': '/home/dragon/.ansible_async/j554835465458.1671', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491007 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j249642523049.1703', 'results_file': '/home/dragon/.ansible_async/j249642523049.1703', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491019 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j222019976254.1743', 'results_file': '/home/dragon/.ansible_async/j222019976254.1743', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491029 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 14:02:19.491068 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j230450119730.1775', 'results_file': '/home/dragon/.ansible_async/j230450119730.1775', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491081 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j616344338586.1800', 'results_file': '/home/dragon/.ansible_async/j616344338586.1800', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491092 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j499024385998.1834', 'results_file': '/home/dragon/.ansible_async/j499024385998.1834', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491103 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j688236103104.1866', 'results_file': '/home/dragon/.ansible_async/j688236103104.1866', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491116 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j506023300013.1907', 'results_file': '/home/dragon/.ansible_async/j506023300013.1907', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491169 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j83603174246.1933', 'results_file': '/home/dragon/.ansible_async/j83603174246.1933', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-14 14:02:19.491183 | orchestrator | 2025-05-14 14:02:19.491195 | 
orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-14 14:02:19.542836 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:19.542929 | orchestrator | 2025-05-14 14:02:19.542945 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-14 14:02:20.183104 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:20.183212 | orchestrator | 2025-05-14 14:02:20.183227 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-14 14:02:20.552230 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:20.552370 | orchestrator | 2025-05-14 14:02:20.552397 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-14 14:02:20.887327 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:20.887536 | orchestrator | 2025-05-14 14:02:20.887557 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-14 14:02:20.941093 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:02:20.941190 | orchestrator | 2025-05-14 14:02:20.941203 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-14 14:02:21.272323 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:21.272480 | orchestrator | 2025-05-14 14:02:21.272513 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-14 14:02:21.390645 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:02:21.390749 | orchestrator | 2025-05-14 14:02:21.390765 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-14 14:02:21.390777 | orchestrator | 2025-05-14 14:02:21.390789 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 14:02:23.198283 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:23.198393 | orchestrator | 2025-05-14 14:02:23.198409 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-14 14:02:23.299634 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-14 14:02:23.299722 | orchestrator | 2025-05-14 14:02:23.299736 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-14 14:02:23.353213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-14 14:02:23.353305 | orchestrator | 2025-05-14 14:02:23.353320 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-14 14:02:24.415002 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-14 14:02:24.415134 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-14 14:02:24.415150 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-14 14:02:24.415162 | orchestrator | 2025-05-14 14:02:24.415175 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-14 14:02:26.199374 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-14 14:02:26.199616 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-14 14:02:26.199644 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-14 
14:02:26.199664 | orchestrator | 2025-05-14 14:02:26.199683 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-14 14:02:26.850670 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:02:26.850764 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:26.850774 | orchestrator | 2025-05-14 14:02:26.850802 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-14 14:02:27.446595 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:02:27.446689 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:27.446700 | orchestrator | 2025-05-14 14:02:27.446708 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-14 14:02:27.501929 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:02:27.502009 | orchestrator | 2025-05-14 14:02:27.502049 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-14 14:02:27.855224 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:27.855318 | orchestrator | 2025-05-14 14:02:27.855330 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-14 14:02:27.911750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-14 14:02:27.911839 | orchestrator | 2025-05-14 14:02:27.911852 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-14 14:02:28.945028 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:28.945136 | orchestrator | 2025-05-14 14:02:28.945151 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-14 14:02:29.756840 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:29.756969 | orchestrator | 2025-05-14 14:02:29.756985 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-14 14:02:32.985306 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:32.985393 | orchestrator | 2025-05-14 14:02:32.985401 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-14 14:02:33.116862 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-14 14:02:33.116956 | orchestrator | 2025-05-14 14:02:33.116970 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-14 14:02:33.175941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 14:02:33.176034 | orchestrator | 2025-05-14 14:02:33.176048 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-14 14:02:35.658192 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:35.658275 | orchestrator | 2025-05-14 14:02:35.658285 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-14 14:02:35.771121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-14 14:02:35.771216 | orchestrator | 2025-05-14 14:02:35.771229 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-14 
14:02:36.847838 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-14 14:02:36.847951 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-14 14:02:36.847967 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-14 14:02:36.848010 | orchestrator | 2025-05-14 14:02:36.848023 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-14 14:02:36.905892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-14 14:02:36.905991 | orchestrator | 2025-05-14 14:02:36.906006 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-14 14:02:37.554914 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-14 14:02:37.555024 | orchestrator | 2025-05-14 14:02:37.555042 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-14 14:02:38.221441 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:38.221549 | orchestrator | 2025-05-14 14:02:38.221563 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-14 14:02:38.842852 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:02:38.842953 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:38.842973 | orchestrator | 2025-05-14 14:02:38.842987 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-14 14:02:39.249228 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:39.249307 | orchestrator | 2025-05-14 14:02:39.249314 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-14 14:02:39.578898 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:39.579001 | orchestrator | 2025-05-14 14:02:39.579017 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-14 14:02:39.622543 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:02:39.622640 | orchestrator | 2025-05-14 14:02:39.622654 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-14 14:02:40.250256 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:40.250363 | orchestrator | 2025-05-14 14:02:40.250380 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-14 14:02:40.326965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-14 14:02:40.327041 | orchestrator | 2025-05-14 14:02:40.327048 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-14 14:02:41.061908 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-14 14:02:41.062073 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-14 14:02:41.062092 | orchestrator | 2025-05-14 14:02:41.062106 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-14 14:02:41.721031 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-14 14:02:41.721137 | orchestrator | 2025-05-14 14:02:41.721153 | orchestrator | TASK [osism.services.netbox : 
Copy netbox configuration file] ****************** 2025-05-14 14:02:42.336886 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:42.336984 | orchestrator | 2025-05-14 14:02:42.336999 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-14 14:02:42.388490 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:02:42.388584 | orchestrator | 2025-05-14 14:02:42.388594 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-14 14:02:43.019591 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:43.019671 | orchestrator | 2025-05-14 14:02:43.019678 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-14 14:02:44.802706 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:02:44.802794 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:02:44.802802 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:02:44.802809 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:44.802816 | orchestrator | 2025-05-14 14:02:44.802822 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-14 14:02:50.594867 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-14 14:02:50.594986 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-14 14:02:50.595004 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-14 14:02:50.595016 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-14 14:02:50.595058 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-14 14:02:50.595070 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-14 14:02:50.595081 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-14 14:02:50.595108 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-14 14:02:50.595121 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-14 14:02:50.595133 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-14 14:02:50.595145 | orchestrator | 2025-05-14 14:02:50.595157 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-14 14:02:51.232145 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-14 14:02:51.232228 | orchestrator | 2025-05-14 14:02:51.232238 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-14 14:02:51.302445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-14 14:02:51.302540 | orchestrator | 2025-05-14 14:02:51.302554 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-14 14:02:51.998715 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:51.998791 | orchestrator | 2025-05-14 14:02:51.998798 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-14 14:02:52.605297 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:52.605462 | orchestrator | 2025-05-14 14:02:52.605480 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-14 
14:02:53.331661 | orchestrator | changed: [testbed-manager] 2025-05-14 14:02:53.331765 | orchestrator | 2025-05-14 14:02:53.331781 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-14 14:02:55.673895 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:55.674008 | orchestrator | 2025-05-14 14:02:55.674090 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-14 14:02:56.574444 | orchestrator | ok: [testbed-manager] 2025-05-14 14:02:56.574546 | orchestrator | 2025-05-14 14:02:56.574562 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-14 14:03:18.754635 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-14 14:03:18.754756 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:18.754773 | orchestrator | 2025-05-14 14:03:18.754786 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-14 14:03:18.815533 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:18.815594 | orchestrator | 2025-05-14 14:03:18.815606 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-14 14:03:18.815618 | orchestrator | 2025-05-14 14:03:18.815629 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-14 14:03:18.858269 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:18.858339 | orchestrator | 2025-05-14 14:03:18.858353 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-14 14:03:18.917699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-14 14:03:18.917735 | orchestrator | 2025-05-14 14:03:18.917746 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-14 14:03:19.704197 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:19.704369 | orchestrator | 2025-05-14 14:03:19.704387 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-14 14:03:19.771183 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:19.771274 | orchestrator | 2025-05-14 14:03:19.771289 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-14 14:03:19.821384 | orchestrator | ok: [testbed-manager] => { 2025-05-14 14:03:19.821471 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-14 14:03:19.821488 | orchestrator | } 2025-05-14 14:03:19.821502 | orchestrator | 2025-05-14 14:03:19.821514 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-14 14:03:20.459372 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:20.459478 | orchestrator | 2025-05-14 14:03:20.459489 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-14 14:03:21.307379 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:21.307478 | orchestrator | 2025-05-14 14:03:21.307493 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-14 14:03:21.376566 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:21.376602 | orchestrator | 2025-05-14 14:03:21.376614 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-14 14:03:21.421368 | orchestrator | ok: [testbed-manager] => { 2025-05-14 14:03:21.421457 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-14 14:03:21.421470 | orchestrator | } 2025-05-14 14:03:21.421481 | orchestrator | 2025-05-14 14:03:21.421492 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-14 14:03:21.485173 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:21.485234 | orchestrator | 2025-05-14 14:03:21.485247 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-14 14:03:21.542879 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:21.542946 | orchestrator | 2025-05-14 14:03:21.542960 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-14 14:03:21.602005 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:21.602124 | orchestrator | 2025-05-14 14:03:21.602139 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-14 14:03:21.664146 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:21.664222 | orchestrator | 2025-05-14 14:03:21.664236 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-14 14:03:21.723071 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:21.723148 | orchestrator | 2025-05-14 14:03:21.723157 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-14 14:03:21.822890 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:03:21.822991 | orchestrator | 2025-05-14 14:03:21.823011 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-14 14:03:23.231563 | orchestrator | changed: [testbed-manager] 2025-05-14 14:03:23.231687 | orchestrator | 2025-05-14 14:03:23.231704 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-14 14:03:23.300427 | orchestrator | ok: [testbed-manager] 2025-05-14 14:03:23.300507 | orchestrator | 2025-05-14 14:03:23.300520 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-14 14:04:23.355011 | orchestrator | Pausing for 60 seconds 2025-05-14 14:04:23.355116 | orchestrator | changed: [testbed-manager] 2025-05-14 14:04:23.355132 | orchestrator | 2025-05-14 14:04:23.355146 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-14 14:04:23.397473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-14 14:04:23.397552 | orchestrator | 2025-05-14 14:04:23.397567 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-14 14:08:02.987359 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-14 14:08:02.987526 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-14 14:08:02.987557 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-05-14 14:08:02.987576 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-14 14:08:02.987594 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-14 14:08:02.987614 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-14 14:08:02.987633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-14 14:08:02.987652 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-14 14:08:02.987672 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-14 14:08:02.987727 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-14 14:08:02.987747 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-14 14:08:02.987767 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-14 14:08:02.987788 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-14 14:08:02.987840 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-14 14:08:02.987863 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-14 14:08:02.987887 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-14 14:08:02.987907 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-14 14:08:02.987927 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-14 14:08:02.987946 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-14 14:08:02.987963 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-14 14:08:02.987977 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 
2025-05-14 14:08:02.987990 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:02.988004 | orchestrator | 2025-05-14 14:08:02.988018 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-14 14:08:02.988030 | orchestrator | 2025-05-14 14:08:02.988043 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 14:08:05.084144 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:05.084251 | orchestrator | 2025-05-14 14:08:05.084267 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-14 14:08:05.188060 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-14 14:08:05.188156 | orchestrator | 2025-05-14 14:08:05.188171 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-14 14:08:05.249707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 14:08:05.249798 | orchestrator | 2025-05-14 14:08:05.249945 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-14 14:08:07.053066 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:07.053172 | orchestrator | 2025-05-14 14:08:07.053189 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-14 14:08:07.099181 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:07.099289 | orchestrator | 2025-05-14 14:08:07.099306 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-14 14:08:07.192560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-14 14:08:07.192657 | orchestrator | 2025-05-14 14:08:07.192670 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-14 14:08:09.958551 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-14 14:08:09.958673 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-14 14:08:09.958689 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-14 14:08:09.958702 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-14 14:08:09.958713 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-14 14:08:09.958724 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-14 14:08:09.958735 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-14 14:08:09.958746 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-14 14:08:09.958757 | orchestrator | 2025-05-14 14:08:09.958842 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-14 14:08:10.595320 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:10.595460 | orchestrator | 2025-05-14 14:08:10.595479 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-14 14:08:10.667514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-14 14:08:10.667623 | orchestrator | 2025-05-14 14:08:10.667638 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-05-14 14:08:11.846663 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-14 14:08:11.846758 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-14 14:08:11.846771 | orchestrator | 2025-05-14 14:08:11.846782 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-14 14:08:12.453062 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:12.453182 | orchestrator | 2025-05-14 14:08:12.453204 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-14 14:08:12.510886 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:08:12.510973 | orchestrator | 2025-05-14 14:08:12.510986 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-14 14:08:12.569047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-14 14:08:12.569141 | orchestrator | 2025-05-14 14:08:12.569156 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-14 14:08:13.889983 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:08:13.890150 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:08:13.890167 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:13.890180 | orchestrator | 2025-05-14 14:08:13.890192 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-14 14:08:14.486961 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:14.487073 | orchestrator | 2025-05-14 14:08:14.487111 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-14 14:08:14.595958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-14 14:08:14.596087 | orchestrator | 2025-05-14 14:08:14.596112 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-14 14:08:15.810675 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:08:15.810769 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:08:15.810778 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:15.810787 | orchestrator | 2025-05-14 14:08:15.810835 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-14 14:08:16.437958 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:16.438065 | orchestrator | 2025-05-14 14:08:16.438075 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-14 14:08:16.536092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-14 14:08:16.536185 | orchestrator | 2025-05-14 14:08:16.536200 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-14 14:08:17.116235 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:17.116335 | orchestrator | 2025-05-14 14:08:17.116349 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-14 14:08:17.511024 | orchestrator | changed: 
[testbed-manager] 2025-05-14 14:08:17.511146 | orchestrator | 2025-05-14 14:08:17.511175 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-14 14:08:18.687883 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-14 14:08:18.687991 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-14 14:08:18.688005 | orchestrator | 2025-05-14 14:08:18.688018 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-14 14:08:19.418327 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:19.418446 | orchestrator | 2025-05-14 14:08:19.418461 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-14 14:08:19.821454 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:19.821564 | orchestrator | 2025-05-14 14:08:19.821589 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-14 14:08:20.184521 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:20.184632 | orchestrator | 2025-05-14 14:08:20.184648 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-14 14:08:20.224007 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:08:20.224096 | orchestrator | 2025-05-14 14:08:20.224110 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-14 14:08:20.298961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-14 14:08:20.299063 | orchestrator | 2025-05-14 14:08:20.299078 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-14 14:08:20.336349 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:20.336429 | orchestrator | 2025-05-14 14:08:20.336443 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-14 14:08:22.321954 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-14 14:08:22.322149 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-14 14:08:22.322174 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-14 14:08:22.322191 | orchestrator | 2025-05-14 14:08:22.322208 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-14 14:08:23.010079 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:23.010187 | orchestrator | 2025-05-14 14:08:23.010204 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-14 14:08:23.723993 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:23.724100 | orchestrator | 2025-05-14 14:08:23.724118 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-14 14:08:24.424052 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:24.424181 | orchestrator | 2025-05-14 14:08:24.424197 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-14 14:08:24.502135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-14 14:08:24.502242 | orchestrator | 2025-05-14 14:08:24.502257 | orchestrator | TASK [osism.services.manager : 
Include scripts vars file] ********************** 2025-05-14 14:08:24.550881 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:24.550974 | orchestrator | 2025-05-14 14:08:24.550988 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-14 14:08:25.252053 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-14 14:08:25.252166 | orchestrator | 2025-05-14 14:08:25.252182 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-14 14:08:25.341282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-14 14:08:25.341375 | orchestrator | 2025-05-14 14:08:25.341388 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-14 14:08:26.029043 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:26.029147 | orchestrator | 2025-05-14 14:08:26.029164 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-14 14:08:26.641483 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:26.641573 | orchestrator | 2025-05-14 14:08:26.641584 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-14 14:08:26.682210 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:08:26.682327 | orchestrator | 2025-05-14 14:08:26.682351 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-14 14:08:26.746325 | orchestrator | ok: [testbed-manager] 2025-05-14 14:08:26.746423 | orchestrator | 2025-05-14 14:08:26.746437 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-14 14:08:27.548984 | orchestrator | changed: [testbed-manager] 2025-05-14 14:08:27.549100 | orchestrator | 2025-05-14 14:08:27.549117 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-14 14:09:06.833957 | orchestrator | changed: [testbed-manager] 2025-05-14 14:09:06.834172 | orchestrator | 2025-05-14 14:09:06.834212 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-14 14:09:07.514524 | orchestrator | ok: [testbed-manager] 2025-05-14 14:09:07.514635 | orchestrator | 2025-05-14 14:09:07.514652 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-14 14:09:10.307550 | orchestrator | changed: [testbed-manager] 2025-05-14 14:09:10.307645 | orchestrator | 2025-05-14 14:09:10.307655 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-14 14:09:10.389650 | orchestrator | ok: [testbed-manager] 2025-05-14 14:09:10.389820 | orchestrator | 2025-05-14 14:09:10.389837 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-14 14:09:10.389850 | orchestrator | 2025-05-14 14:09:10.389861 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-14 14:09:10.445917 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:09:10.446004 | orchestrator | 2025-05-14 14:09:10.446010 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-14 14:10:10.511648 | orchestrator | Pausing for 60 seconds 2025-05-14 14:10:10.511846 | 
orchestrator | changed: [testbed-manager] 2025-05-14 14:10:10.511861 | orchestrator | 2025-05-14 14:10:10.511871 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-14 14:10:15.995114 | orchestrator | changed: [testbed-manager] 2025-05-14 14:10:15.995243 | orchestrator | 2025-05-14 14:10:15.995260 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-14 14:10:57.752696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-14 14:10:57.752804 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-14 14:10:57.752816 | orchestrator | changed: [testbed-manager] 2025-05-14 14:10:57.752825 | orchestrator | 2025-05-14 14:10:57.752835 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-14 14:11:03.667386 | orchestrator | changed: [testbed-manager] 2025-05-14 14:11:03.667552 | orchestrator | 2025-05-14 14:11:03.667577 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-14 14:11:03.772870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-14 14:11:03.772956 | orchestrator | 2025-05-14 14:11:03.772971 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-14 14:11:03.772988 | orchestrator | 2025-05-14 14:11:03.773008 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-14 14:11:03.827841 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:11:03.827918 | orchestrator | 2025-05-14 14:11:03.827932 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:11:03.827945 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-14 14:11:03.827956 | orchestrator | 2025-05-14 14:11:03.926308 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 14:11:03.926390 | orchestrator | + deactivate 2025-05-14 14:11:03.926405 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-14 14:11:03.926418 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 14:11:03.926429 | orchestrator | + export PATH 2025-05-14 14:11:03.926440 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-14 14:11:03.926452 | orchestrator | + '[' -n '' ']' 2025-05-14 14:11:03.926463 | orchestrator | + hash -r 2025-05-14 14:11:03.926474 | orchestrator | + '[' -n '' ']' 2025-05-14 14:11:03.926484 | orchestrator | + unset VIRTUAL_ENV 2025-05-14 14:11:03.926495 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-14 14:11:03.926506 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-14 14:11:03.926517 | orchestrator | + unset -f deactivate 2025-05-14 14:11:03.926529 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-14 14:11:03.932449 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 14:11:03.932525 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-14 14:11:03.932568 | orchestrator | + local max_attempts=60 2025-05-14 14:11:03.932581 | orchestrator | + local name=ceph-ansible 2025-05-14 14:11:03.932593 | orchestrator | + local attempt_num=1 2025-05-14 14:11:03.933432 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-14 14:11:03.968397 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 14:11:03.968499 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-14 14:11:03.968522 | orchestrator | + local max_attempts=60 2025-05-14 14:11:03.968543 | orchestrator | + local name=kolla-ansible 2025-05-14 14:11:03.968564 | orchestrator | + local attempt_num=1 2025-05-14 14:11:03.969380 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-14 14:11:03.999172 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 14:11:03.999270 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-14 14:11:03.999293 | orchestrator | + local max_attempts=60 2025-05-14 14:11:03.999313 | orchestrator | + local name=osism-ansible 2025-05-14 14:11:03.999331 | orchestrator | + local attempt_num=1 2025-05-14 14:11:03.999904 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-14 14:11:04.033164 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 14:11:04.033260 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-14 14:11:04.033280 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-14 14:11:04.694939 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-14 14:11:04.750346 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-14 14:11:04.750443 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-14 14:11:04.750455 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-14 14:11:04.945277 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-14 14:11:04.945372 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945384 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945392 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-14 14:11:04.945402 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-14 14:11:04.945429 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945443 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945452 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945460 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 49 seconds (healthy) 2025-05-14 14:11:04.945469 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945477 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-14 14:11:04.945486 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945515 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945524 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-14 14:11:04.945533 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945541 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945550 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.945558 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-14 14:11:04.952033 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-14 14:11:05.111578 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-14 14:11:05.111718 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-14 14:11:05.111735 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-14 14:11:05.111748 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-05-14 14:11:05.111761 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-05-14 14:11:05.117875 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 14:11:05.170756 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 14:11:05.170828 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-14 14:11:05.176436 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-14 14:11:06.827797 | orchestrator | 2025-05-14 14:11:06 | INFO  | Task a933fdd5-7809-4c3b-aa3b-a9972055c935 (resolvconf) was prepared for execution. 
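The xtrace output above shows the deployment script gating on container health with wait_for_container_healthy, which polls docker inspect for the Health.Status of each container (ceph-ansible, kolla-ansible, osism-ansible) before continuing. Only the happy path appears in the log because every container was already healthy, so the retry and timeout handling in the sketch below is an assumption (poll interval, failure message and return code included) and not the testbed's actual script:

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the Docker health status until the container reports "healthy"
    # or the attempt budget is exhausted. The 5-second sleep is an assumption.
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}

# Usage as seen in the trace above:
wait_for_container_healthy 60 ceph-ansible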
2025-05-14 14:11:06.827906 | orchestrator | 2025-05-14 14:11:06 | INFO  | It takes a moment until task a933fdd5-7809-4c3b-aa3b-a9972055c935 (resolvconf) has been started and output is visible here. 2025-05-14 14:11:09.829521 | orchestrator | 2025-05-14 14:11:09.830826 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-14 14:11:09.832958 | orchestrator | 2025-05-14 14:11:09.834099 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 14:11:09.835120 | orchestrator | Wednesday 14 May 2025 14:11:09 +0000 (0:00:00.087) 0:00:00.087 ********* 2025-05-14 14:11:13.801415 | orchestrator | ok: [testbed-manager] 2025-05-14 14:11:13.801537 | orchestrator | 2025-05-14 14:11:13.801773 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-14 14:11:13.804615 | orchestrator | Wednesday 14 May 2025 14:11:13 +0000 (0:00:03.975) 0:00:04.062 ********* 2025-05-14 14:11:13.854276 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:11:13.855202 | orchestrator | 2025-05-14 14:11:13.856144 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-14 14:11:13.856496 | orchestrator | Wednesday 14 May 2025 14:11:13 +0000 (0:00:00.053) 0:00:04.116 ********* 2025-05-14 14:11:13.940042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-14 14:11:13.940525 | orchestrator | 2025-05-14 14:11:13.941434 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-14 14:11:13.942369 | orchestrator | Wednesday 14 May 2025 14:11:13 +0000 (0:00:00.085) 0:00:04.201 ********* 2025-05-14 14:11:14.028808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 14:11:14.028935 | orchestrator | 2025-05-14 14:11:14.030247 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-14 14:11:14.030272 | orchestrator | Wednesday 14 May 2025 14:11:14 +0000 (0:00:00.088) 0:00:04.289 ********* 2025-05-14 14:11:15.159568 | orchestrator | ok: [testbed-manager] 2025-05-14 14:11:15.159861 | orchestrator | 2025-05-14 14:11:15.160464 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-14 14:11:15.161371 | orchestrator | Wednesday 14 May 2025 14:11:15 +0000 (0:00:01.130) 0:00:05.420 ********* 2025-05-14 14:11:15.204494 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:11:15.204762 | orchestrator | 2025-05-14 14:11:15.206123 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-14 14:11:15.206611 | orchestrator | Wednesday 14 May 2025 14:11:15 +0000 (0:00:00.045) 0:00:05.466 ********* 2025-05-14 14:11:15.668523 | orchestrator | ok: [testbed-manager] 2025-05-14 14:11:15.669766 | orchestrator | 2025-05-14 14:11:15.670457 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-14 14:11:15.671673 | orchestrator | Wednesday 14 May 2025 14:11:15 +0000 (0:00:00.463) 0:00:05.929 ********* 2025-05-14 14:11:15.747924 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:11:15.748128 | orchestrator | 2025-05-14 14:11:15.748726 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-14 14:11:15.750014 | orchestrator | Wednesday 14 May 2025 14:11:15 +0000 (0:00:00.079) 0:00:06.009 ********* 2025-05-14 14:11:16.315746 | orchestrator | changed: [testbed-manager] 2025-05-14 14:11:16.315878 | orchestrator | 2025-05-14 14:11:16.316491 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-14 14:11:16.316695 | orchestrator | Wednesday 14 May 2025 14:11:16 +0000 (0:00:00.565) 0:00:06.574 ********* 2025-05-14 14:11:17.449351 | orchestrator | changed: [testbed-manager] 2025-05-14 14:11:17.450013 | orchestrator | 2025-05-14 14:11:17.450623 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-14 14:11:17.453134 | orchestrator | Wednesday 14 May 2025 14:11:17 +0000 (0:00:01.132) 0:00:07.707 ********* 2025-05-14 14:11:18.402211 | orchestrator | ok: [testbed-manager] 2025-05-14 14:11:18.402418 | orchestrator | 2025-05-14 14:11:18.402698 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-14 14:11:18.403388 | orchestrator | Wednesday 14 May 2025 14:11:18 +0000 (0:00:00.953) 0:00:08.661 ********* 2025-05-14 14:11:18.489005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-14 14:11:18.489152 | orchestrator | 2025-05-14 14:11:18.489229 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-14 14:11:18.489773 | orchestrator | Wednesday 14 May 2025 14:11:18 +0000 (0:00:00.088) 0:00:08.749 ********* 2025-05-14 14:11:19.678228 | orchestrator | changed: [testbed-manager] 2025-05-14 14:11:19.678387 | orchestrator | 2025-05-14 14:11:19.678468 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:11:19.678602 | orchestrator | 2025-05-14 14:11:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:11:19.678760 | orchestrator | 2025-05-14 14:11:19 | INFO  | Please wait and do not abort execution. 
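The resolvconf play above removes packages that would otherwise manage /etc/resolv.conf, links the systemd-resolved stub file to /etc/resolv.conf, copies the resolved configuration files and restarts systemd-resolved. A rough manual equivalent on a Debian-family host, shown only for orientation (the actual name servers come from the testbed configuration and are not reproduced here):

# Point /etc/resolv.conf at the systemd-resolved stub resolver.
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
# Ensure the resolver is enabled, then restart it to pick up the new configuration.
systemctl enable --now systemd-resolved.service
systemctl restart systemd-resolved.service
# Verify which name servers are now in effect.
resolvectl status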
2025-05-14 14:11:19.679809 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:11:19.680139 | orchestrator | 2025-05-14 14:11:19.680809 | orchestrator | Wednesday 14 May 2025 14:11:19 +0000 (0:00:01.188) 0:00:09.937 ********* 2025-05-14 14:11:19.681254 | orchestrator | =============================================================================== 2025-05-14 14:11:19.682297 | orchestrator | Gathering Facts --------------------------------------------------------- 3.98s 2025-05-14 14:11:19.682576 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2025-05-14 14:11:19.683339 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2025-05-14 14:11:19.684048 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2025-05-14 14:11:19.684348 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2025-05-14 14:11:19.684672 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2025-05-14 14:11:19.685055 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2025-05-14 14:11:19.685372 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-05-14 14:11:19.685741 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-05-14 14:11:19.686256 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-05-14 14:11:19.686671 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-14 14:11:19.686910 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-05-14 14:11:19.687812 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-05-14 14:11:20.101389 | orchestrator | + osism apply sshconfig 2025-05-14 14:11:21.446144 | orchestrator | 2025-05-14 14:11:21 | INFO  | Task 489c62d4-eed8-4829-a72b-f8be5b5b6fdf (sshconfig) was prepared for execution. 2025-05-14 14:11:21.446249 | orchestrator | 2025-05-14 14:11:21 | INFO  | It takes a moment until task 489c62d4-eed8-4829-a72b-f8be5b5b6fdf (sshconfig) has been started and output is visible here. 
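For orientation, the remaining configuration is driven through the manager with "osism apply <play>"; the "-l" option limits a run to specific hosts, as in the resolvconf run above. Only the invocations that actually appear in this log are listed here, with no additional flags assumed:

osism apply resolvconf -l testbed-manager   # limited to the manager node
osism apply sshconfig                       # assemble ~/.ssh/config for the operator user
osism apply known-hosts                     # populate known_hosts via ssh-keyscan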
2025-05-14 14:11:24.550849 | orchestrator | 2025-05-14 14:11:24.551209 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-14 14:11:24.552220 | orchestrator | 2025-05-14 14:11:24.553679 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-14 14:11:24.554223 | orchestrator | Wednesday 14 May 2025 14:11:24 +0000 (0:00:00.112) 0:00:00.112 ********* 2025-05-14 14:11:25.180711 | orchestrator | ok: [testbed-manager] 2025-05-14 14:11:25.181118 | orchestrator | 2025-05-14 14:11:25.182246 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-14 14:11:25.184370 | orchestrator | Wednesday 14 May 2025 14:11:25 +0000 (0:00:00.629) 0:00:00.741 ********* 2025-05-14 14:11:25.674730 | orchestrator | changed: [testbed-manager] 2025-05-14 14:11:25.674912 | orchestrator | 2025-05-14 14:11:25.676598 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-14 14:11:25.676961 | orchestrator | Wednesday 14 May 2025 14:11:25 +0000 (0:00:00.494) 0:00:01.236 ********* 2025-05-14 14:11:31.594864 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-14 14:11:31.595041 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-14 14:11:31.595071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-14 14:11:31.595186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-14 14:11:31.595444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-14 14:11:31.596208 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-14 14:11:31.598179 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-14 14:11:31.598369 | orchestrator | 2025-05-14 14:11:31.598850 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-14 14:11:31.600433 | orchestrator | Wednesday 14 May 2025 14:11:31 +0000 (0:00:05.915) 0:00:07.152 ********* 2025-05-14 14:11:31.670288 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:11:31.670712 | orchestrator | 2025-05-14 14:11:31.672589 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-14 14:11:31.672797 | orchestrator | Wednesday 14 May 2025 14:11:31 +0000 (0:00:00.079) 0:00:07.232 ********* 2025-05-14 14:11:32.275317 | orchestrator | changed: [testbed-manager] 2025-05-14 14:11:32.275424 | orchestrator | 2025-05-14 14:11:32.275442 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:11:32.275458 | orchestrator | 2025-05-14 14:11:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:11:32.275470 | orchestrator | 2025-05-14 14:11:32 | INFO  | Please wait and do not abort execution. 
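The sshconfig play above prepares ~/.ssh/config.d for the operator user, writes one snippet per testbed host and then assembles them into a single ~/.ssh/config (task "Assemble ssh config"). A minimal shell sketch of the same idea follows; the Host block contents (user name, identity file) are illustrative assumptions and are not taken from the role's templates:

# Sketch only: the real role renders each snippet from inventory data.
mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
    printf 'Host %s\n    User dragon\n    IdentityFile ~/.ssh/id_rsa\n' "$host" > ~/.ssh/config.d/"$host"
done
# "Assemble ssh config": concatenate the snippets into the effective config.
cat ~/.ssh/config.d/* > ~/.ssh/config
chmod 600 ~/.ssh/config

The known-hosts play that follows is the complementary step: it runs ssh-keyscan against every host, both by hostname and by ansible_host address, and writes the collected rsa, ecdsa and ed25519 keys into the operator's known_hosts, which is what the remaining output of this section shows.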
2025-05-14 14:11:32.275835 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:11:32.277252 | orchestrator | 2025-05-14 14:11:32.277760 | orchestrator | Wednesday 14 May 2025 14:11:32 +0000 (0:00:00.604) 0:00:07.837 ********* 2025-05-14 14:11:32.278087 | orchestrator | =============================================================================== 2025-05-14 14:11:32.278657 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.92s 2025-05-14 14:11:32.278946 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s 2025-05-14 14:11:32.279516 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2025-05-14 14:11:32.279780 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-05-14 14:11:32.280030 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-05-14 14:11:32.771140 | orchestrator | + osism apply known-hosts 2025-05-14 14:11:34.206180 | orchestrator | 2025-05-14 14:11:34 | INFO  | Task b8a2e4ec-3bd6-4ddc-91be-fe926b3cd2f5 (known-hosts) was prepared for execution. 2025-05-14 14:11:34.206314 | orchestrator | 2025-05-14 14:11:34 | INFO  | It takes a moment until task b8a2e4ec-3bd6-4ddc-91be-fe926b3cd2f5 (known-hosts) has been started and output is visible here. 2025-05-14 14:11:37.239958 | orchestrator | 2025-05-14 14:11:37.240192 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-14 14:11:37.241320 | orchestrator | 2025-05-14 14:11:37.244542 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-14 14:11:37.245174 | orchestrator | Wednesday 14 May 2025 14:11:37 +0000 (0:00:00.108) 0:00:00.108 ********* 2025-05-14 14:11:43.322112 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-14 14:11:43.323460 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-14 14:11:43.324728 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-14 14:11:43.325256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-14 14:11:43.326793 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-14 14:11:43.327151 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-14 14:11:43.327568 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-14 14:11:43.331499 | orchestrator | 2025-05-14 14:11:43.331538 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-14 14:11:43.334707 | orchestrator | Wednesday 14 May 2025 14:11:43 +0000 (0:00:06.081) 0:00:06.189 ********* 2025-05-14 14:11:43.512989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-14 14:11:43.513226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-14 14:11:43.513775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-14 
14:11:43.514084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-14 14:11:43.516012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-14 14:11:43.516055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-14 14:11:43.516069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-14 14:11:43.516082 | orchestrator | 2025-05-14 14:11:43.516560 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:43.516894 | orchestrator | Wednesday 14 May 2025 14:11:43 +0000 (0:00:00.188) 0:00:06.378 ********* 2025-05-14 14:11:44.798338 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB9MWEEF7JbxMLVfzEI1iLNmcqWdhdkCdtkYHP+gqjRyCPDUkpItUG6pqucHSBk2GWxKBQO/B2KrcP6EcgpULxs=) 2025-05-14 14:11:44.798534 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTMU1H/+n5lMkFv0BKps7VC/8AZzu7j0RZ3ktzY7szbz33Nd689U3056FoaWS37LiBE66WStyeAtJwuEJ649+tLHXa7OzFdne8ZskMXPRWwXDspjq48ChyOrXUB6DGUq977BiKbeL15CYMmfC5dMzLUFrNxAjWxM3U/UoadLZXjX51Q+zjcqH0rZ4pD+p8SCInJJXFKmsShfIlHJl41B326QgGPxoRRyKIXD3uREi0ShkLBKeKOHSUJDVvMoJgtw9YIIoJS0wiShJiUSYqFPcb+u81PSsMWv97Q8Faz0AZpWcyMtBEyus+Ph3Qs2P29C90Y5bY+kvbo5Aul4Y1Wl7Ap0LZ0BsE18TM82dxUF7NTWHfw7isQrc4OPshs5hTRzglIeBwG6cABWi3rBaf4Khzo8VBDmNCLRag6V8wUd1nSO4tdtUrjG7bNuF3GccEoShUCXx8V1ChR+BXFwjkQpx4p1Yzns0p5u6wixab2cOwuUTImN/h8v1GJyUWwSBGK6c=) 2025-05-14 14:11:44.798591 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPEzF1ol0cPDrj16H3s39RU6+9tHgorxgxcTOCpR45Fn) 2025-05-14 14:11:44.799110 | orchestrator | 2025-05-14 14:11:44.799137 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:44.799490 | orchestrator | Wednesday 14 May 2025 14:11:44 +0000 (0:00:01.288) 0:00:07.666 ********* 2025-05-14 14:11:45.977022 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOODdyfncuFzEeM/F8c/K04s//muA1+1UfjB8LPc6sWzUy5LHQF7pb+su/kPwKG2TYqhctLzXgvj9CTrQlVJJDU=) 2025-05-14 14:11:45.977130 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuqUctqAZ/wJmGxYBmX726nzC1J4QY8E5x9XyvKWDuP) 2025-05-14 14:11:45.977253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDBuvZ9gcemx409gXPUn3bKWmzM7XICSIM3RwLSvtUDNEHZyesaBtamF4eb4vZyG+l3x3oHqCkyywvq77FSIKJ0HE+QpFg/ICSZvjMojU6lQSZ6UbMSLwECJl44KtzDL18A5ANiPMp/wBwAYVOsDg7l3ZYiOT+00OWqegqMsW0o+AlGThCRkghit29dba4ibPbLgv6WoFbTiLazU6lPVbqFliZyZsN1S86sJp54aqkgvw3dMoPgOJpObJU3MOpRQ+S3hFwTplJuYnlp75uDwB3Qiw/KVdsj27XQ5UsF4biVrqIJtRpjWGq1+hyy1IH2tcCRaA0OXnLAendbZZnE3a4iDHfG0rsylgAAgz4XZMPK0kmdkI80mS4Zrax9F/Yr3eyM7xtx9ut4i8CCY1kXD6NIHrrWPmKde+dIbmjBoM42TLp+ldhvIuj+yEmIGMA1fNjQO9PJOBIF85+P/VJ8qEk3cgvTNbGMzB6301pCnyp9Aew7czrAiM9Q4bHbYI1vtE=) 2025-05-14 14:11:45.977271 | orchestrator | 2025-05-14 14:11:45.977437 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:45.977500 | orchestrator | Wednesday 14 May 2025 14:11:45 +0000 (0:00:01.179) 0:00:08.845 ********* 2025-05-14 14:11:47.088910 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZT010CbJ4afRnb/QnUNaj+3HBd2PUqte+MxuKta6iqTkYsEvhime8JvSryz8il9W2IEHWrs4XTRWQjtK+MoW0BjfIvleAOq3SJ/siUzakx2vL4aou2np03XVmFghav2te3r3S/WvrBjZZNlTokFYyRmA/8FkevM6Y0D+hIerBrFq78K22S2APejpQzQyqO+5NB3wNdiq7NJRim6tO2OH4XXWOxNm9WhMFL+ggHRwI1Pp8TSN0Ods+jTBbRcRjLM/xxvdE9hl9tQeSMJUdV7/JSlXMUOAlV4jG1f1bbKil93jKRt4ronROR/nlcibshIXDCnIk5iUGlqrcGPUkQof4DO0staw2uRZCWbxvEtTp2nGh+aPsgt1VPn5zY8xNMhF/TsmOBZdGpXBlAQEBq3/y5ckVjU2NCMNGsxMsENUXCStWZ3r3/TO+ItWgnrYe1IJI77QRwDKitOE9hfALME04g+T7A92fvZfQSYw+PzHnC1za9p6oOSoVhnZLggKVE7U=) 2025-05-14 14:11:47.089116 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPiwRtcp/6Jr7gJlfp1O3bio4/D5MueGMDzrkkCIZykYs2i526FFkti0KWl1HQp1GmWweYUjMkrOu26H7mM7BvE=) 2025-05-14 14:11:47.090470 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEuJDVLVhUj2aQhB4J9fVkBaQo2PMc6FxJEO2y/7RfUs) 2025-05-14 14:11:47.091508 | orchestrator | 2025-05-14 14:11:47.093055 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:47.094510 | orchestrator | Wednesday 14 May 2025 14:11:47 +0000 (0:00:01.112) 0:00:09.958 ********* 2025-05-14 14:11:48.179863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCBlbASy79nxReR+GMqRDzUDfQBUBBkmPep5D9e31TAetoKASZkCWURNjYvKVWYEM08koqCICcXWLfBx0vvzXlFF8r5YKRC5jjuGhUzGFod7YdpaTNf/ERXo4BgvOtS8hbZF7ujg7KqHKNs7f/sGJOwyNq+Lu4rU+jBuQBDm2vf4jTQHSvfOUJXcX3wfQZ7/uof1YOpmCOlnhpMPsDzAgI1X/77lU2dl+oVtyrfS7fR+LnTk8lytU6Zeu6sMBsXHAfmkxBMQMd9u3AIjJ8l+tam8uorwylUmliMJtNUXKZZrmBF0rIaisJ2K4JZ05mGDRjU19J9hMqyjBysnr1iIXBf5sesmELDwLOCRE3PfOFGjyPdQhPRrfr2qZB7ThrxPdgE4NjJ/bH+RXdayn/rX/ILWqOhT+x2x39lDZ7EO98sGE5HyMUNjMJw2fBX8OjMc69GfUjBORGSDEU7jK+mgig7O/cdeqFaUHvWZnFu80vK9kRNhHYXWVEIJDEbuhYDkxE=) 2025-05-14 14:11:48.180000 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/8HbQgJ5FQtmMLJuqShmYVQZuudxFMsOBbVZhh4COQ) 2025-05-14 14:11:48.180412 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOrlDssjybqhV8W7J0Esst7vF6+tU6tFqAY9EV3SY04opRxaE+du2mzDs4wgMohy8ymLadrU2Ox9NVs6/Vif2AM=) 2025-05-14 14:11:48.181444 | orchestrator | 2025-05-14 14:11:48.181807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:48.182210 | orchestrator | Wednesday 14 May 2025 14:11:48 +0000 (0:00:01.091) 
0:00:11.049 ********* 2025-05-14 14:11:49.256167 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1mmrURsFW50jrf5dm+NsP3biUR9EF5GOPhpNurBCcKVonr45Ny5reb4GY7t3NbE10px/bzx8wJBC6N2BLBLQ7+onMydR+Ug7cpcDyuR5c9JJ42WhFDrDHrz4cJzgHw1aBY5+Z9WD+ZEnR8YZYQPW2H9h8a/y+QCZcAJXnskrmqcs0f6ovWZIb41DgyvgjsiGWGs2EvZ8Un/7xpaZv8GzrOGQBjGBDA4gr8DlVM2ACW02rE1/admYDNyHq27maq9n7+lDLq/8Uwp+JAcCwwbkt2LNO14iR/eVoutqqYjWwIUPVWTryt006cL4HvTB/fOPHYR1+TL00QKkCwdQDw0pCITmiDtvc7y7NLqg3SJLIQBP4fAhOSI9AeGSTh1j3QokmNT33c+UvAUgGqD4o6yRW/2HTA+XmcYzlV1HgE28jDUBax4QAcFgNbOwFa/8JTzHkfxYtWgkIrlUo3uPC2Dxqo6+LoyeHjJQi/DKdakVh/x730DKwYWX0TRT6RgeNpcM=) 2025-05-14 14:11:49.256279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLOwdyyI+ghwnYuRndgOjObHUn5NwxT2edHHrc8PzsNifPvUHcKUiImp3J5fA7S291gflt4ZR8yMIfjF+rkBhUM=) 2025-05-14 14:11:49.257575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIxyVh0SNXb9owdUzOEt00toJ91iH1c7NfsBDGhrcMXD) 2025-05-14 14:11:49.258417 | orchestrator | 2025-05-14 14:11:49.259381 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:49.259781 | orchestrator | Wednesday 14 May 2025 14:11:49 +0000 (0:00:01.075) 0:00:12.125 ********* 2025-05-14 14:11:50.381158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDoZo5NZvZ0rG1u5kcawqX8I5HsgOTTJzZJ9OIZFyzhidxCT6Ao6qSOPNOKwPlZ8ZPGm2JAZIGCk6cBjSKYp6cFUELOiYiUcyWdfDBN7TR307FtCX4z9PjAiSFyfrAiLX6zzYyc/Ika8CYqDj4tmRZ7ay3dO0qihyGlVn9XNvlNumYF2nH7sT5zb5PIrA0v0vHmOHpm+0jalo2drelHD8N6iaZ9uX5FgfO2W1JiDZ+VdsL3FxhO/6X/0Qs9S80w2KQl18gBkJRxqngVgwvgDMzWgupfdIZECEUshYdbuVQ1Vs/H2F3/ktQHVckjaSntCKMSKk1TxA/IHRPNd84+zxqEim2oGRsL4taYgpKzI+oC4ACLQOLdlOTrRGTtEr8nwUtvomL22nG5cNsDtHljKiZQExPkkVjiSQHGcqkiWSeVdb9eadtTGZS881V5wNGBj78WjdQVf7TM/QvrxTj+nrKfS6VASVK/c0mXsbq7C13bn+mpJErO1nCTiBxmQ3/+3uE=) 2025-05-14 14:11:50.381871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBXCJ0OuJfynZ5sxupIKq8O6+x1dROzhiDLpDjhuTbs) 2025-05-14 14:11:50.383044 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASf/p6PO/YHGEUNvfC3xj45ypg7Hejg37r6RiVH8V09WKuQT9KP7r1ETYVS3crtj7gnWXdRtZmOH1RoGvMuXTY=) 2025-05-14 14:11:50.384253 | orchestrator | 2025-05-14 14:11:50.385329 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:50.385925 | orchestrator | Wednesday 14 May 2025 14:11:50 +0000 (0:00:01.125) 0:00:13.250 ********* 2025-05-14 14:11:51.524672 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1bPLDhtDb9z9LtCAsdS5/6BS4d6d1ywVqFIRxHI8mwM83eukZ9ENjCOEo9bKLVOHxrzX+fbaMcVv7WD2S6r4Qo5e916DljSZpbHjRgaOZFdF63KiZLYmBr60EMoTE+qWq+JPf24rIfSbK3NXuB8/ou81xzGxvC5k1E+lILczC0z9pHG3E4gjsxf/WKq5ulASiYQ+Ppu4htcRz7m2rtQ/lb3xhtm4XYWbsXGI8jK/8Ov0m04JtwTq8KFR3CPbfLd5Zwth37mCdVs8JCX1DRwOuEfmYvMIBRKQxYZThVdRz8tzYDltOK8qb6/+5XdHqiKtjV6FR94BQsCh18TjyNghkG/ORw2rlHP2i6Yv3zdeOwV4Bnk5oKaZp++YODjPmgYZgAJCcUxR7VKnAT97Qm4pi/qaDdrhZ9sDqQfN39rmWhEzBA+WEcsVnqRbFVC4VZMPfM5d5PnXuNeje5/Zny4xZ8Qq4xM3uqsfhtFv/Yx8u71iRZI1LD5tfc7qMHd3+OHM=) 2025-05-14 14:11:51.524784 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAxEZNNLfBzUXLzVE+9CGgL87BVObkXUHmtYhsqIiS1dg3Dg8M3uIHr7rDtkepRZamj5B33y9rngucvfM/OBG5s=) 2025-05-14 14:11:51.525397 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoNKl5H63rTiwITS41KvAoV+exlADNp0FKlDbzyrazn) 2025-05-14 14:11:51.525979 | orchestrator | 2025-05-14 14:11:51.526420 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-14 14:11:51.526928 | orchestrator | Wednesday 14 May 2025 14:11:51 +0000 (0:00:01.143) 0:00:14.393 ********* 2025-05-14 14:11:57.043900 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-14 14:11:57.044050 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-14 14:11:57.044066 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-14 14:11:57.044400 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-14 14:11:57.046657 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-14 14:11:57.047639 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-14 14:11:57.048038 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-14 14:11:57.049129 | orchestrator | 2025-05-14 14:11:57.050426 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-14 14:11:57.050444 | orchestrator | Wednesday 14 May 2025 14:11:57 +0000 (0:00:05.517) 0:00:19.911 ********* 2025-05-14 14:11:57.222662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-14 14:11:57.223606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-14 14:11:57.224576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-14 14:11:57.225130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-14 14:11:57.226669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-14 14:11:57.229045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-14 14:11:57.229909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-14 14:11:57.231368 | orchestrator | 2025-05-14 14:11:57.231717 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:57.232800 | orchestrator | Wednesday 14 May 2025 14:11:57 +0000 (0:00:00.182) 0:00:20.093 ********* 2025-05-14 14:11:58.377001 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB9MWEEF7JbxMLVfzEI1iLNmcqWdhdkCdtkYHP+gqjRyCPDUkpItUG6pqucHSBk2GWxKBQO/B2KrcP6EcgpULxs=) 2025-05-14 14:11:58.378800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTMU1H/+n5lMkFv0BKps7VC/8AZzu7j0RZ3ktzY7szbz33Nd689U3056FoaWS37LiBE66WStyeAtJwuEJ649+tLHXa7OzFdne8ZskMXPRWwXDspjq48ChyOrXUB6DGUq977BiKbeL15CYMmfC5dMzLUFrNxAjWxM3U/UoadLZXjX51Q+zjcqH0rZ4pD+p8SCInJJXFKmsShfIlHJl41B326QgGPxoRRyKIXD3uREi0ShkLBKeKOHSUJDVvMoJgtw9YIIoJS0wiShJiUSYqFPcb+u81PSsMWv97Q8Faz0AZpWcyMtBEyus+Ph3Qs2P29C90Y5bY+kvbo5Aul4Y1Wl7Ap0LZ0BsE18TM82dxUF7NTWHfw7isQrc4OPshs5hTRzglIeBwG6cABWi3rBaf4Khzo8VBDmNCLRag6V8wUd1nSO4tdtUrjG7bNuF3GccEoShUCXx8V1ChR+BXFwjkQpx4p1Yzns0p5u6wixab2cOwuUTImN/h8v1GJyUWwSBGK6c=) 2025-05-14 14:11:58.379776 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPEzF1ol0cPDrj16H3s39RU6+9tHgorxgxcTOCpR45Fn) 2025-05-14 14:11:58.380875 | orchestrator | 2025-05-14 14:11:58.381865 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:58.382576 | orchestrator | Wednesday 14 May 2025 14:11:58 +0000 (0:00:01.153) 0:00:21.246 ********* 2025-05-14 14:11:59.481374 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDBuvZ9gcemx409gXPUn3bKWmzM7XICSIM3RwLSvtUDNEHZyesaBtamF4eb4vZyG+l3x3oHqCkyywvq77FSIKJ0HE+QpFg/ICSZvjMojU6lQSZ6UbMSLwECJl44KtzDL18A5ANiPMp/wBwAYVOsDg7l3ZYiOT+00OWqegqMsW0o+AlGThCRkghit29dba4ibPbLgv6WoFbTiLazU6lPVbqFliZyZsN1S86sJp54aqkgvw3dMoPgOJpObJU3MOpRQ+S3hFwTplJuYnlp75uDwB3Qiw/KVdsj27XQ5UsF4biVrqIJtRpjWGq1+hyy1IH2tcCRaA0OXnLAendbZZnE3a4iDHfG0rsylgAAgz4XZMPK0kmdkI80mS4Zrax9F/Yr3eyM7xtx9ut4i8CCY1kXD6NIHrrWPmKde+dIbmjBoM42TLp+ldhvIuj+yEmIGMA1fNjQO9PJOBIF85+P/VJ8qEk3cgvTNbGMzB6301pCnyp9Aew7czrAiM9Q4bHbYI1vtE=) 2025-05-14 14:11:59.481517 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOODdyfncuFzEeM/F8c/K04s//muA1+1UfjB8LPc6sWzUy5LHQF7pb+su/kPwKG2TYqhctLzXgvj9CTrQlVJJDU=) 2025-05-14 14:11:59.481620 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuqUctqAZ/wJmGxYBmX726nzC1J4QY8E5x9XyvKWDuP) 2025-05-14 14:11:59.483709 | orchestrator | 2025-05-14 14:11:59.484708 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:11:59.485417 | orchestrator | Wednesday 14 May 2025 14:11:59 +0000 (0:00:01.105) 0:00:22.352 ********* 2025-05-14 14:12:00.574508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEuJDVLVhUj2aQhB4J9fVkBaQo2PMc6FxJEO2y/7RfUs) 2025-05-14 14:12:00.574735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZT010CbJ4afRnb/QnUNaj+3HBd2PUqte+MxuKta6iqTkYsEvhime8JvSryz8il9W2IEHWrs4XTRWQjtK+MoW0BjfIvleAOq3SJ/siUzakx2vL4aou2np03XVmFghav2te3r3S/WvrBjZZNlTokFYyRmA/8FkevM6Y0D+hIerBrFq78K22S2APejpQzQyqO+5NB3wNdiq7NJRim6tO2OH4XXWOxNm9WhMFL+ggHRwI1Pp8TSN0Ods+jTBbRcRjLM/xxvdE9hl9tQeSMJUdV7/JSlXMUOAlV4jG1f1bbKil93jKRt4ronROR/nlcibshIXDCnIk5iUGlqrcGPUkQof4DO0staw2uRZCWbxvEtTp2nGh+aPsgt1VPn5zY8xNMhF/TsmOBZdGpXBlAQEBq3/y5ckVjU2NCMNGsxMsENUXCStWZ3r3/TO+ItWgnrYe1IJI77QRwDKitOE9hfALME04g+T7A92fvZfQSYw+PzHnC1za9p6oOSoVhnZLggKVE7U=) 2025-05-14 14:12:00.575948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPiwRtcp/6Jr7gJlfp1O3bio4/D5MueGMDzrkkCIZykYs2i526FFkti0KWl1HQp1GmWweYUjMkrOu26H7mM7BvE=) 2025-05-14 14:12:00.576863 | orchestrator | 2025-05-14 14:12:00.579695 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:12:00.580165 | orchestrator | Wednesday 14 May 2025 14:12:00 +0000 (0:00:01.091) 0:00:23.444 ********* 2025-05-14 14:12:01.673983 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCBlbASy79nxReR+GMqRDzUDfQBUBBkmPep5D9e31TAetoKASZkCWURNjYvKVWYEM08koqCICcXWLfBx0vvzXlFF8r5YKRC5jjuGhUzGFod7YdpaTNf/ERXo4BgvOtS8hbZF7ujg7KqHKNs7f/sGJOwyNq+Lu4rU+jBuQBDm2vf4jTQHSvfOUJXcX3wfQZ7/uof1YOpmCOlnhpMPsDzAgI1X/77lU2dl+oVtyrfS7fR+LnTk8lytU6Zeu6sMBsXHAfmkxBMQMd9u3AIjJ8l+tam8uorwylUmliMJtNUXKZZrmBF0rIaisJ2K4JZ05mGDRjU19J9hMqyjBysnr1iIXBf5sesmELDwLOCRE3PfOFGjyPdQhPRrfr2qZB7ThrxPdgE4NjJ/bH+RXdayn/rX/ILWqOhT+x2x39lDZ7EO98sGE5HyMUNjMJw2fBX8OjMc69GfUjBORGSDEU7jK+mgig7O/cdeqFaUHvWZnFu80vK9kRNhHYXWVEIJDEbuhYDkxE=) 2025-05-14 14:12:01.675512 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOrlDssjybqhV8W7J0Esst7vF6+tU6tFqAY9EV3SY04opRxaE+du2mzDs4wgMohy8ymLadrU2Ox9NVs6/Vif2AM=) 2025-05-14 14:12:01.675824 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/8HbQgJ5FQtmMLJuqShmYVQZuudxFMsOBbVZhh4COQ) 2025-05-14 14:12:01.676714 | orchestrator | 2025-05-14 14:12:01.676761 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:12:01.677433 | orchestrator | Wednesday 14 May 2025 14:12:01 +0000 (0:00:01.099) 0:00:24.543 ********* 2025-05-14 14:12:02.795044 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1mmrURsFW50jrf5dm+NsP3biUR9EF5GOPhpNurBCcKVonr45Ny5reb4GY7t3NbE10px/bzx8wJBC6N2BLBLQ7+onMydR+Ug7cpcDyuR5c9JJ42WhFDrDHrz4cJzgHw1aBY5+Z9WD+ZEnR8YZYQPW2H9h8a/y+QCZcAJXnskrmqcs0f6ovWZIb41DgyvgjsiGWGs2EvZ8Un/7xpaZv8GzrOGQBjGBDA4gr8DlVM2ACW02rE1/admYDNyHq27maq9n7+lDLq/8Uwp+JAcCwwbkt2LNO14iR/eVoutqqYjWwIUPVWTryt006cL4HvTB/fOPHYR1+TL00QKkCwdQDw0pCITmiDtvc7y7NLqg3SJLIQBP4fAhOSI9AeGSTh1j3QokmNT33c+UvAUgGqD4o6yRW/2HTA+XmcYzlV1HgE28jDUBax4QAcFgNbOwFa/8JTzHkfxYtWgkIrlUo3uPC2Dxqo6+LoyeHjJQi/DKdakVh/x730DKwYWX0TRT6RgeNpcM=) 2025-05-14 14:12:02.795305 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLOwdyyI+ghwnYuRndgOjObHUn5NwxT2edHHrc8PzsNifPvUHcKUiImp3J5fA7S291gflt4ZR8yMIfjF+rkBhUM=) 2025-05-14 14:12:02.795817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIxyVh0SNXb9owdUzOEt00toJ91iH1c7NfsBDGhrcMXD) 2025-05-14 14:12:02.796624 | orchestrator | 2025-05-14 14:12:02.797273 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:12:02.797841 | orchestrator | Wednesday 14 May 2025 14:12:02 +0000 (0:00:01.120) 0:00:25.664 ********* 2025-05-14 14:12:03.952503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBXCJ0OuJfynZ5sxupIKq8O6+x1dROzhiDLpDjhuTbs) 2025-05-14 14:12:03.952716 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDoZo5NZvZ0rG1u5kcawqX8I5HsgOTTJzZJ9OIZFyzhidxCT6Ao6qSOPNOKwPlZ8ZPGm2JAZIGCk6cBjSKYp6cFUELOiYiUcyWdfDBN7TR307FtCX4z9PjAiSFyfrAiLX6zzYyc/Ika8CYqDj4tmRZ7ay3dO0qihyGlVn9XNvlNumYF2nH7sT5zb5PIrA0v0vHmOHpm+0jalo2drelHD8N6iaZ9uX5FgfO2W1JiDZ+VdsL3FxhO/6X/0Qs9S80w2KQl18gBkJRxqngVgwvgDMzWgupfdIZECEUshYdbuVQ1Vs/H2F3/ktQHVckjaSntCKMSKk1TxA/IHRPNd84+zxqEim2oGRsL4taYgpKzI+oC4ACLQOLdlOTrRGTtEr8nwUtvomL22nG5cNsDtHljKiZQExPkkVjiSQHGcqkiWSeVdb9eadtTGZS881V5wNGBj78WjdQVf7TM/QvrxTj+nrKfS6VASVK/c0mXsbq7C13bn+mpJErO1nCTiBxmQ3/+3uE=) 2025-05-14 14:12:03.953418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASf/p6PO/YHGEUNvfC3xj45ypg7Hejg37r6RiVH8V09WKuQT9KP7r1ETYVS3crtj7gnWXdRtZmOH1RoGvMuXTY=) 2025-05-14 14:12:03.953669 | orchestrator | 2025-05-14 14:12:03.955127 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 14:12:03.956276 | orchestrator | Wednesday 14 May 2025 14:12:03 +0000 (0:00:01.155) 0:00:26.820 ********* 2025-05-14 14:12:05.030971 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAxEZNNLfBzUXLzVE+9CGgL87BVObkXUHmtYhsqIiS1dg3Dg8M3uIHr7rDtkepRZamj5B33y9rngucvfM/OBG5s=) 2025-05-14 14:12:05.031224 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1bPLDhtDb9z9LtCAsdS5/6BS4d6d1ywVqFIRxHI8mwM83eukZ9ENjCOEo9bKLVOHxrzX+fbaMcVv7WD2S6r4Qo5e916DljSZpbHjRgaOZFdF63KiZLYmBr60EMoTE+qWq+JPf24rIfSbK3NXuB8/ou81xzGxvC5k1E+lILczC0z9pHG3E4gjsxf/WKq5ulASiYQ+Ppu4htcRz7m2rtQ/lb3xhtm4XYWbsXGI8jK/8Ov0m04JtwTq8KFR3CPbfLd5Zwth37mCdVs8JCX1DRwOuEfmYvMIBRKQxYZThVdRz8tzYDltOK8qb6/+5XdHqiKtjV6FR94BQsCh18TjyNghkG/ORw2rlHP2i6Yv3zdeOwV4Bnk5oKaZp++YODjPmgYZgAJCcUxR7VKnAT97Qm4pi/qaDdrhZ9sDqQfN39rmWhEzBA+WEcsVnqRbFVC4VZMPfM5d5PnXuNeje5/Zny4xZ8Qq4xM3uqsfhtFv/Yx8u71iRZI1LD5tfc7qMHd3+OHM=) 2025-05-14 14:12:05.032177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoNKl5H63rTiwITS41KvAoV+exlADNp0FKlDbzyrazn) 2025-05-14 14:12:05.032518 | orchestrator | 2025-05-14 14:12:05.033280 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-14 14:12:05.033710 | orchestrator | Wednesday 14 May 2025 14:12:05 +0000 (0:00:01.078) 0:00:27.898 ********* 2025-05-14 14:12:05.200653 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-14 14:12:05.200754 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-14 14:12:05.200827 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-14 14:12:05.201390 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-14 14:12:05.202464 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-14 14:12:05.202706 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-14 14:12:05.202738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-14 14:12:05.202874 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:12:05.203803 | orchestrator | 2025-05-14 14:12:05.203867 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-14 14:12:05.203908 | orchestrator | Wednesday 14 May 2025 14:12:05 +0000 (0:00:00.172) 0:00:28.071 ********* 2025-05-14 14:12:05.273759 | orchestrator | skipping: 
[testbed-manager] 2025-05-14 14:12:05.273972 | orchestrator | 2025-05-14 14:12:05.274766 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-14 14:12:05.276442 | orchestrator | Wednesday 14 May 2025 14:12:05 +0000 (0:00:00.073) 0:00:28.145 ********* 2025-05-14 14:12:05.331026 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:12:05.331232 | orchestrator | 2025-05-14 14:12:05.332160 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-14 14:12:05.332758 | orchestrator | Wednesday 14 May 2025 14:12:05 +0000 (0:00:00.056) 0:00:28.201 ********* 2025-05-14 14:12:06.079043 | orchestrator | changed: [testbed-manager] 2025-05-14 14:12:06.080076 | orchestrator | 2025-05-14 14:12:06.082412 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:12:06.082472 | orchestrator | 2025-05-14 14:12:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:12:06.082512 | orchestrator | 2025-05-14 14:12:06 | INFO  | Please wait and do not abort execution. 2025-05-14 14:12:06.083195 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:12:06.084860 | orchestrator | 2025-05-14 14:12:06.086215 | orchestrator | Wednesday 14 May 2025 14:12:06 +0000 (0:00:00.748) 0:00:28.949 ********* 2025-05-14 14:12:06.088321 | orchestrator | =============================================================================== 2025-05-14 14:12:06.089846 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.08s 2025-05-14 14:12:06.090472 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.52s 2025-05-14 14:12:06.093491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.29s 2025-05-14 14:12:06.094154 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-05-14 14:12:06.096780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-05-14 14:12:06.096841 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-05-14 14:12:06.097858 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-14 14:12:06.098879 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-05-14 14:12:06.099473 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-14 14:12:06.099813 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-14 14:12:06.100402 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-14 14:12:06.100899 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-14 14:12:06.101678 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-14 14:12:06.102166 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-14 14:12:06.102455 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-14 14:12:06.102983 | orchestrator | osism.commons.known_hosts : Write scanned 
known_hosts entries ----------- 1.08s 2025-05-14 14:12:06.104461 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2025-05-14 14:12:06.104982 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-05-14 14:12:06.105705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-05-14 14:12:06.107008 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-05-14 14:12:06.503330 | orchestrator | + osism apply squid 2025-05-14 14:12:07.931919 | orchestrator | 2025-05-14 14:12:07 | INFO  | Task 62250af5-372e-4ae7-8d10-4b4a626fecba (squid) was prepared for execution. 2025-05-14 14:12:07.932025 | orchestrator | 2025-05-14 14:12:07 | INFO  | It takes a moment until task 62250af5-372e-4ae7-8d10-4b4a626fecba (squid) has been started and output is visible here. 2025-05-14 14:12:11.008099 | orchestrator | 2025-05-14 14:12:11.008253 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-14 14:12:11.008799 | orchestrator | 2025-05-14 14:12:11.009785 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-14 14:12:11.010292 | orchestrator | Wednesday 14 May 2025 14:12:10 +0000 (0:00:00.109) 0:00:00.109 ********* 2025-05-14 14:12:11.103538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 14:12:11.104164 | orchestrator | 2025-05-14 14:12:11.105049 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-14 14:12:11.106083 | orchestrator | Wednesday 14 May 2025 14:12:11 +0000 (0:00:00.099) 0:00:00.209 ********* 2025-05-14 14:12:12.539059 | orchestrator | ok: [testbed-manager] 2025-05-14 14:12:12.539143 | orchestrator | 2025-05-14 14:12:12.539192 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-14 14:12:12.540108 | orchestrator | Wednesday 14 May 2025 14:12:12 +0000 (0:00:01.433) 0:00:01.643 ********* 2025-05-14 14:12:13.714817 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-14 14:12:13.714961 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-14 14:12:13.715828 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-14 14:12:13.717431 | orchestrator | 2025-05-14 14:12:13.718316 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-14 14:12:13.718425 | orchestrator | Wednesday 14 May 2025 14:12:13 +0000 (0:00:01.171) 0:00:02.814 ********* 2025-05-14 14:12:14.803243 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-14 14:12:14.803347 | orchestrator | 2025-05-14 14:12:14.803464 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-14 14:12:14.804373 | orchestrator | Wednesday 14 May 2025 14:12:14 +0000 (0:00:01.091) 0:00:03.906 ********* 2025-05-14 14:12:15.197510 | orchestrator | ok: [testbed-manager] 2025-05-14 14:12:15.197763 | orchestrator | 2025-05-14 14:12:15.197778 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-14 14:12:15.197788 | orchestrator | Wednesday 14 May 2025 14:12:15 +0000 
(0:00:00.394) 0:00:04.300 ********* 2025-05-14 14:12:16.176217 | orchestrator | changed: [testbed-manager] 2025-05-14 14:12:16.176820 | orchestrator | 2025-05-14 14:12:16.177480 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-14 14:12:16.178178 | orchestrator | Wednesday 14 May 2025 14:12:16 +0000 (0:00:00.979) 0:00:05.280 ********* 2025-05-14 14:12:48.137439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-14 14:12:48.137604 | orchestrator | ok: [testbed-manager] 2025-05-14 14:12:48.137623 | orchestrator | 2025-05-14 14:12:48.137635 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-14 14:12:48.137647 | orchestrator | Wednesday 14 May 2025 14:12:48 +0000 (0:00:31.958) 0:00:37.238 ********* 2025-05-14 14:13:00.750451 | orchestrator | changed: [testbed-manager] 2025-05-14 14:13:00.750598 | orchestrator | 2025-05-14 14:13:00.750616 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-14 14:13:00.750629 | orchestrator | Wednesday 14 May 2025 14:13:00 +0000 (0:00:12.614) 0:00:49.853 ********* 2025-05-14 14:14:00.823018 | orchestrator | Pausing for 60 seconds 2025-05-14 14:14:00.823179 | orchestrator | changed: [testbed-manager] 2025-05-14 14:14:00.823206 | orchestrator | 2025-05-14 14:14:00.823226 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-14 14:14:00.824060 | orchestrator | Wednesday 14 May 2025 14:14:00 +0000 (0:01:00.071) 0:01:49.924 ********* 2025-05-14 14:14:00.886397 | orchestrator | ok: [testbed-manager] 2025-05-14 14:14:00.886813 | orchestrator | 2025-05-14 14:14:00.887935 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-14 14:14:00.888387 | orchestrator | Wednesday 14 May 2025 14:14:00 +0000 (0:00:00.068) 0:01:49.992 ********* 2025-05-14 14:14:01.455706 | orchestrator | changed: [testbed-manager] 2025-05-14 14:14:01.455818 | orchestrator | 2025-05-14 14:14:01.455834 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:14:01.455929 | orchestrator | 2025-05-14 14:14:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:14:01.455946 | orchestrator | 2025-05-14 14:14:01 | INFO  | Please wait and do not abort execution. 
2025-05-14 14:14:01.456268 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 14:14:01.456504 | orchestrator |
2025-05-14 14:14:01.456700 | orchestrator | Wednesday 14 May 2025 14:14:01 +0000 (0:00:00.569) 0:01:50.562 *********
2025-05-14 14:14:01.457058 | orchestrator | ===============================================================================
2025-05-14 14:14:01.457459 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-05-14 14:14:01.457827 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.96s
2025-05-14 14:14:01.458185 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.61s
2025-05-14 14:14:01.458517 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s
2025-05-14 14:14:01.458734 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2025-05-14 14:14:01.458994 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s
2025-05-14 14:14:01.459200 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s
2025-05-14 14:14:01.459355 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s
2025-05-14 14:14:01.459994 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s
2025-05-14 14:14:01.460282 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-05-14 14:14:01.460315 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-05-14 14:14:01.838510 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-14 14:14:01.838616 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-05-14 14:14:01.844026 | orchestrator | ++ semver 8.1.0 9.0.0
2025-05-14 14:14:01.893748 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-14 14:14:01.893839 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-14 14:14:01.893853 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-05-14 14:14:01.898186 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-14 14:14:01.901989 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-14 14:14:01.906393 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-14 14:14:03.290825 | orchestrator | 2025-05-14 14:14:03 | INFO  | Task 1070f0b9-ce82-4c1e-ab8a-ce6f5f67730b (operator) was prepared for execution.
2025-05-14 14:14:03.290923 | orchestrator | 2025-05-14 14:14:03 | INFO  | It takes a moment until task 1070f0b9-ce82-4c1e-ab8a-ce6f5f67730b (operator) has been started and output is visible here.
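Two things finished just above and can be sanity-checked by hand before the operator play output begins: the squid proxy was brought up on testbed-manager via the docker-compose.yml copied to /opt/squid (the handlers waited for a healthy service), and the sed calls re-enabled the commented-out network_dispatcher_scripts/vxlan.sh block in the testbed group_vars and switched the kolla image namespace to kolla/release because 8.1.0 is a pinned release, not "latest". The following is a hedged sketch of such checks, run on testbed-manager; the compose service name "squid" and the proxy port 3128 are assumptions not taken from this log, while the expected grep output is inferred directly from the sed patterns above.

    # Hedged sketch, not part of the job: re-check the squid proxy and the config edits.
    cd /opt/squid
    docker compose ps                                      # container should report a healthy/running state
    curl -sI -x http://127.0.0.1:3128 https://github.com   # request routed through the proxy; 3128 is assumed
    # Confirm the sed edits took effect (lines inferred from the sed patterns above):
    grep -A2 '^network_dispatcher_scripts:' /opt/configuration/inventory/group_vars/testbed-nodes.yml
    grep 'docker_namespace:' /opt/configuration/inventory/group_vars/all/kolla.yml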
2025-05-14 14:14:06.250245 | orchestrator | 2025-05-14 14:14:06.250795 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-14 14:14:06.250825 | orchestrator | 2025-05-14 14:14:06.252700 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 14:14:06.253168 | orchestrator | Wednesday 14 May 2025 14:14:06 +0000 (0:00:00.086) 0:00:00.086 ********* 2025-05-14 14:14:09.561833 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:14:09.563047 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:14:09.563078 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:09.563512 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:14:09.564113 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:09.564633 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:09.565260 | orchestrator | 2025-05-14 14:14:09.565857 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-14 14:14:09.566169 | orchestrator | Wednesday 14 May 2025 14:14:09 +0000 (0:00:03.317) 0:00:03.404 ********* 2025-05-14 14:14:10.343191 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:14:10.343458 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:10.343690 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:10.344108 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:10.344606 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:14:10.345066 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:14:10.345466 | orchestrator | 2025-05-14 14:14:10.345920 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-14 14:14:10.346242 | orchestrator | 2025-05-14 14:14:10.346752 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-14 14:14:10.347977 | orchestrator | Wednesday 14 May 2025 14:14:10 +0000 (0:00:00.781) 0:00:04.185 ********* 2025-05-14 14:14:10.406456 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:14:10.419591 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:14:10.435730 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:14:10.470181 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:10.470242 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:10.470252 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:10.470262 | orchestrator | 2025-05-14 14:14:10.470272 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-14 14:14:10.470283 | orchestrator | Wednesday 14 May 2025 14:14:10 +0000 (0:00:00.124) 0:00:04.310 ********* 2025-05-14 14:14:10.515980 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:14:10.534374 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:14:10.547090 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:14:10.586379 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:10.586918 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:10.587558 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:10.588590 | orchestrator | 2025-05-14 14:14:10.589293 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-14 14:14:10.589529 | orchestrator | Wednesday 14 May 2025 14:14:10 +0000 (0:00:00.115) 0:00:04.425 ********* 2025-05-14 14:14:11.185666 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:11.186252 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:11.187330 | orchestrator | changed: [testbed-node-0] 2025-05-14 
14:14:11.188315 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:11.189260 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:11.189975 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:11.191212 | orchestrator | 2025-05-14 14:14:11.191865 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-14 14:14:11.192494 | orchestrator | Wednesday 14 May 2025 14:14:11 +0000 (0:00:00.600) 0:00:05.026 ********* 2025-05-14 14:14:11.984371 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:11.987410 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:11.988687 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:11.989576 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:11.990460 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:14:11.991513 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:11.992403 | orchestrator | 2025-05-14 14:14:11.993362 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-14 14:14:11.993898 | orchestrator | Wednesday 14 May 2025 14:14:11 +0000 (0:00:00.797) 0:00:05.824 ********* 2025-05-14 14:14:13.092886 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-14 14:14:13.096352 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-14 14:14:13.096537 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-14 14:14:13.097789 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-14 14:14:13.097915 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-14 14:14:13.098615 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-14 14:14:13.099074 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-14 14:14:13.100454 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-14 14:14:13.101603 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-14 14:14:13.102633 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-14 14:14:13.104487 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-14 14:14:13.105932 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-14 14:14:13.105955 | orchestrator | 2025-05-14 14:14:13.106518 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-14 14:14:13.107148 | orchestrator | Wednesday 14 May 2025 14:14:13 +0000 (0:00:01.108) 0:00:06.933 ********* 2025-05-14 14:14:14.383160 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:14:14.383381 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:14.387905 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:14.388050 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:14.389046 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:14.391377 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:14.392532 | orchestrator | 2025-05-14 14:14:14.393661 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-14 14:14:14.394100 | orchestrator | Wednesday 14 May 2025 14:14:14 +0000 (0:00:01.288) 0:00:08.221 ********* 2025-05-14 14:14:15.636232 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-14 14:14:15.636350 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-14 14:14:15.636364 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-14 14:14:15.768555 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 14:14:15.770135 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 14:14:15.770321 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 14:14:15.772004 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 14:14:15.773154 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 14:14:15.774056 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 14:14:15.775059 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-14 14:14:15.775637 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-14 14:14:15.777029 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-14 14:14:15.777166 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-14 14:14:15.778229 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-14 14:14:15.779181 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-14 14:14:15.779490 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-14 14:14:15.783927 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-14 14:14:15.783969 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-14 14:14:15.784241 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-14 14:14:15.784263 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-14 14:14:15.784275 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-14 14:14:15.784287 | orchestrator | 2025-05-14 14:14:15.784300 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-14 14:14:15.784312 | orchestrator | Wednesday 14 May 2025 14:14:15 +0000 (0:00:01.387) 0:00:09.608 ********* 2025-05-14 14:14:16.386098 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:14:16.386495 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:16.387474 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:16.388332 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:16.389343 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:16.392957 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:16.392982 | orchestrator | 2025-05-14 14:14:16.392996 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-14 14:14:16.393010 | orchestrator | Wednesday 14 May 2025 14:14:16 +0000 (0:00:00.617) 0:00:10.226 ********* 2025-05-14 14:14:16.448106 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:14:16.489395 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:14:16.532655 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:14:16.532765 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:14:16.533146 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:16.533547 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:16.534101 | orchestrator | 2025-05-14 14:14:16.534530 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-14 14:14:16.534999 | orchestrator | Wednesday 14 May 2025 14:14:16 +0000 (0:00:00.148) 0:00:10.375 ********* 2025-05-14 14:14:17.324294 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:14:17.324484 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:14:17.324915 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 14:14:17.325603 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:17.326083 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 14:14:17.327021 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-14 14:14:17.330792 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 14:14:17.331493 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:17.332101 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:17.332545 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:17.333126 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-14 14:14:17.333546 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:17.333982 | orchestrator | 2025-05-14 14:14:17.334373 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-14 14:14:17.334854 | orchestrator | Wednesday 14 May 2025 14:14:17 +0000 (0:00:00.789) 0:00:11.164 ********* 2025-05-14 14:14:17.380256 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:14:17.398821 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:14:17.420290 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:14:17.440451 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:14:17.465612 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:17.466627 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:17.467213 | orchestrator | 2025-05-14 14:14:17.467996 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-14 14:14:17.468376 | orchestrator | Wednesday 14 May 2025 14:14:17 +0000 (0:00:00.142) 0:00:11.307 ********* 2025-05-14 14:14:17.503974 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:14:17.522194 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:14:17.560357 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:14:17.587452 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:14:17.587543 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:17.590076 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:17.590104 | orchestrator | 2025-05-14 14:14:17.590133 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-14 14:14:17.590214 | orchestrator | Wednesday 14 May 2025 14:14:17 +0000 (0:00:00.121) 0:00:11.428 ********* 2025-05-14 14:14:17.631052 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:14:17.650906 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:14:17.677338 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:14:17.695894 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:14:17.722298 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:17.724914 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:17.724952 | orchestrator | 2025-05-14 14:14:17.724966 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-14 14:14:17.725003 | orchestrator | Wednesday 14 May 2025 14:14:17 +0000 (0:00:00.135) 0:00:11.563 ********* 2025-05-14 14:14:18.369359 | orchestrator | changed: [testbed-node-0] 2025-05-14 
14:14:18.369781 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:18.371046 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:18.372021 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:18.372744 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:18.374877 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:18.374978 | orchestrator | 2025-05-14 14:14:18.375771 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-14 14:14:18.376561 | orchestrator | Wednesday 14 May 2025 14:14:18 +0000 (0:00:00.645) 0:00:12.209 ********* 2025-05-14 14:14:18.430857 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:14:18.475007 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:14:18.575619 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:14:18.575715 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:14:18.578686 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:18.578724 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:18.581185 | orchestrator | 2025-05-14 14:14:18.581211 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:14:18.581239 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:14:18.581252 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:14:18.581263 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:14:18.581274 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:14:18.581307 | orchestrator | 2025-05-14 14:14:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:14:18.581320 | orchestrator | 2025-05-14 14:14:18 | INFO  | Please wait and do not abort execution. 
2025-05-14 14:14:18.581397 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-14 14:14:18.581618 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-14 14:14:18.581995 | orchestrator |
2025-05-14 14:14:18.582371 | orchestrator | Wednesday 14 May 2025 14:14:18 +0000 (0:00:00.205) 0:00:12.414 *********
2025-05-14 14:14:18.582579 | orchestrator | ===============================================================================
2025-05-14 14:14:18.582956 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s
2025-05-14 14:14:18.583141 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2025-05-14 14:14:18.583436 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2025-05-14 14:14:18.583680 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.11s
2025-05-14 14:14:18.584195 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s
2025-05-14 14:14:18.584514 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.79s
2025-05-14 14:14:18.585087 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2025-05-14 14:14:18.585164 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-05-14 14:14:18.585615 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2025-05-14 14:14:18.586162 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-05-14 14:14:18.588052 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-05-14 14:14:18.588106 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2025-05-14 14:14:18.588118 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-05-14 14:14:18.588129 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-05-14 14:14:18.588139 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s
2025-05-14 14:14:18.588150 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s
2025-05-14 14:14:18.588216 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.12s
2025-05-14 14:14:18.955611 | orchestrator | + osism apply --environment custom facts
2025-05-14 14:14:20.151379 | orchestrator | 2025-05-14 14:14:20 | INFO  | Trying to run play facts in environment custom
2025-05-14 14:14:20.195213 | orchestrator | 2025-05-14 14:14:20 | INFO  | Task 4a6f4fb3-e1df-429f-9beb-e3ccdd2173dd (facts) was prepared for execution.
2025-05-14 14:14:20.195260 | orchestrator | 2025-05-14 14:14:20 | INFO  | It takes a moment until task 4a6f4fb3-e1df-429f-9beb-e3ccdd2173dd (facts) has been started and output is visible here.
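The facts play that follows runs in the custom environment and copies the testbed's custom fact files to the hosts (the network devices fact and, on the storage nodes, the testbed_ceph_devices* facts visible later in the output). A rough way to inspect the result afterwards is sketched below, assuming the conventional Ansible local-facts location /etc/ansible/facts.d; the exact directory, file names and extension used by the testbed are assumptions based on the item names in the log, not confirmed by it.

    # Hedged sketch: inspect the distributed custom facts on a node, e.g. testbed-node-3.
    ls -l /etc/ansible/facts.d/                      # assumed custom facts directory
    cat /etc/ansible/facts.d/testbed_ceph_devices*   # file extension (.fact) is assumed
    # Files in this directory are exposed as ansible_local.<name> on the next fact
    # gathering run, which is why the play ends with "Gathers facts about hosts".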
2025-05-14 14:14:22.859592 | orchestrator | 2025-05-14 14:14:22.859696 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-14 14:14:22.860003 | orchestrator | 2025-05-14 14:14:22.861502 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 14:14:22.862735 | orchestrator | Wednesday 14 May 2025 14:14:22 +0000 (0:00:00.072) 0:00:00.072 ********* 2025-05-14 14:14:23.999781 | orchestrator | ok: [testbed-manager] 2025-05-14 14:14:25.091840 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:25.092161 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:14:25.092910 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:25.094758 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:25.097038 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:25.098299 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:25.098591 | orchestrator | 2025-05-14 14:14:25.099382 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-14 14:14:25.100002 | orchestrator | Wednesday 14 May 2025 14:14:25 +0000 (0:00:02.233) 0:00:02.306 ********* 2025-05-14 14:14:26.154237 | orchestrator | ok: [testbed-manager] 2025-05-14 14:14:27.018682 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:27.018919 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:27.021709 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:27.022435 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:14:27.023163 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:14:27.023859 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:14:27.024360 | orchestrator | 2025-05-14 14:14:27.024945 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-14 14:14:27.025386 | orchestrator | 2025-05-14 14:14:27.026085 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 14:14:27.026597 | orchestrator | Wednesday 14 May 2025 14:14:27 +0000 (0:00:01.927) 0:00:04.233 ********* 2025-05-14 14:14:27.136930 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:27.137104 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:27.137121 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:27.138966 | orchestrator | 2025-05-14 14:14:27.140377 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 14:14:27.140751 | orchestrator | Wednesday 14 May 2025 14:14:27 +0000 (0:00:00.120) 0:00:04.353 ********* 2025-05-14 14:14:27.287516 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:27.290200 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:27.291895 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:27.292488 | orchestrator | 2025-05-14 14:14:27.292708 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 14:14:27.293050 | orchestrator | Wednesday 14 May 2025 14:14:27 +0000 (0:00:00.145) 0:00:04.498 ********* 2025-05-14 14:14:27.409573 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:27.409689 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:27.409712 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:27.409850 | orchestrator | 2025-05-14 14:14:27.409868 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 14:14:27.410587 | orchestrator | Wednesday 
14 May 2025 14:14:27 +0000 (0:00:00.126) 0:00:04.625 ********* 2025-05-14 14:14:27.547789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:14:27.548331 | orchestrator | 2025-05-14 14:14:27.548701 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 14:14:27.552256 | orchestrator | Wednesday 14 May 2025 14:14:27 +0000 (0:00:00.137) 0:00:04.763 ********* 2025-05-14 14:14:27.966346 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:27.966631 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:27.967169 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:27.968362 | orchestrator | 2025-05-14 14:14:27.969281 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 14:14:27.971109 | orchestrator | Wednesday 14 May 2025 14:14:27 +0000 (0:00:00.418) 0:00:05.181 ********* 2025-05-14 14:14:28.081361 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:14:28.081576 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:28.081663 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:28.082200 | orchestrator | 2025-05-14 14:14:28.082549 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 14:14:28.083001 | orchestrator | Wednesday 14 May 2025 14:14:28 +0000 (0:00:00.110) 0:00:05.292 ********* 2025-05-14 14:14:29.027127 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:29.027592 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:29.028090 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:29.028510 | orchestrator | 2025-05-14 14:14:29.032119 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 14:14:29.032516 | orchestrator | Wednesday 14 May 2025 14:14:29 +0000 (0:00:00.949) 0:00:06.242 ********* 2025-05-14 14:14:29.473874 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:29.474001 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:29.474071 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:29.474308 | orchestrator | 2025-05-14 14:14:29.474531 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 14:14:29.474909 | orchestrator | Wednesday 14 May 2025 14:14:29 +0000 (0:00:00.446) 0:00:06.689 ********* 2025-05-14 14:14:30.529378 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:30.531378 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:30.531970 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:30.534545 | orchestrator | 2025-05-14 14:14:30.535039 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 14:14:30.535276 | orchestrator | Wednesday 14 May 2025 14:14:30 +0000 (0:00:01.050) 0:00:07.739 ********* 2025-05-14 14:14:43.969334 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:43.969475 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:43.969488 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:43.969496 | orchestrator | 2025-05-14 14:14:43.969504 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-14 14:14:43.969555 | orchestrator | Wednesday 14 May 2025 14:14:43 +0000 (0:00:13.433) 0:00:21.172 ********* 2025-05-14 14:14:44.043598 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 14:14:44.097699 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:14:44.097777 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:14:44.098259 | orchestrator | 2025-05-14 14:14:44.098280 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-14 14:14:44.100980 | orchestrator | Wednesday 14 May 2025 14:14:44 +0000 (0:00:00.139) 0:00:21.312 ********* 2025-05-14 14:14:51.464808 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:14:51.465149 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:14:51.466511 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:14:51.466647 | orchestrator | 2025-05-14 14:14:51.467509 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 14:14:51.468255 | orchestrator | Wednesday 14 May 2025 14:14:51 +0000 (0:00:07.366) 0:00:28.679 ********* 2025-05-14 14:14:51.930102 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:51.931139 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:51.931169 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:51.931853 | orchestrator | 2025-05-14 14:14:51.932489 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-14 14:14:51.933417 | orchestrator | Wednesday 14 May 2025 14:14:51 +0000 (0:00:00.466) 0:00:29.145 ********* 2025-05-14 14:14:55.430216 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-14 14:14:55.431384 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-14 14:14:55.432515 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-14 14:14:55.433545 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-14 14:14:55.435641 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-14 14:14:55.437194 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-14 14:14:55.438548 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-14 14:14:55.439214 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-14 14:14:55.440629 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-14 14:14:55.441026 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-14 14:14:55.442009 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-14 14:14:55.442788 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-14 14:14:55.443352 | orchestrator | 2025-05-14 14:14:55.443945 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 14:14:55.444442 | orchestrator | Wednesday 14 May 2025 14:14:55 +0000 (0:00:03.498) 0:00:32.643 ********* 2025-05-14 14:14:56.542564 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:14:56.542665 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:14:56.542893 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:14:56.544265 | orchestrator | 2025-05-14 14:14:56.545001 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 14:14:56.546301 | orchestrator | 2025-05-14 14:14:56.547459 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 14:14:56.548437 | orchestrator | 
Wednesday 14 May 2025 14:14:56 +0000 (0:00:01.111) 0:00:33.755 ********* 2025-05-14 14:14:58.236776 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:01.449809 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:01.450620 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:01.452011 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:01.453233 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:01.454845 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:01.455863 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:01.457222 | orchestrator | 2025-05-14 14:15:01.458094 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:15:01.459393 | orchestrator | 2025-05-14 14:15:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:15:01.459428 | orchestrator | 2025-05-14 14:15:01 | INFO  | Please wait and do not abort execution. 2025-05-14 14:15:01.460145 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:15:01.460778 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:15:01.461327 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:15:01.461759 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:15:01.462217 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:15:01.462772 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:15:01.463438 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:15:01.463699 | orchestrator | 2025-05-14 14:15:01.464153 | orchestrator | Wednesday 14 May 2025 14:15:01 +0000 (0:00:04.909) 0:00:38.664 ********* 2025-05-14 14:15:01.464562 | orchestrator | =============================================================================== 2025-05-14 14:15:01.464932 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.43s 2025-05-14 14:15:01.465263 | orchestrator | Install required packages (Debian) -------------------------------------- 7.37s 2025-05-14 14:15:01.465873 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.91s 2025-05-14 14:15:01.466133 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s 2025-05-14 14:15:01.466522 | orchestrator | Create custom facts directory ------------------------------------------- 2.23s 2025-05-14 14:15:01.466921 | orchestrator | Copy fact file ---------------------------------------------------------- 1.93s 2025-05-14 14:15:01.467366 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.11s 2025-05-14 14:15:01.467741 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s 2025-05-14 14:15:01.468371 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.95s 2025-05-14 14:15:01.468548 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s 2025-05-14 14:15:01.468967 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2025-05-14 
14:15:01.469315 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2025-05-14 14:15:01.469651 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s 2025-05-14 14:15:01.470086 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.14s 2025-05-14 14:15:01.470780 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-05-14 14:15:01.471234 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s 2025-05-14 14:15:01.471567 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-05-14 14:15:01.471927 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-05-14 14:15:01.870972 | orchestrator | + osism apply bootstrap 2025-05-14 14:15:03.236218 | orchestrator | 2025-05-14 14:15:03 | INFO  | Task dd9cb16a-ced3-43fc-8771-b1308b2f94a5 (bootstrap) was prepared for execution. 2025-05-14 14:15:03.236312 | orchestrator | 2025-05-14 14:15:03 | INFO  | It takes a moment until task dd9cb16a-ced3-43fc-8771-b1308b2f94a5 (bootstrap) has been started and output is visible here. 2025-05-14 14:15:06.346852 | orchestrator | 2025-05-14 14:15:06.349617 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-14 14:15:06.349703 | orchestrator | 2025-05-14 14:15:06.349717 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-14 14:15:06.350624 | orchestrator | Wednesday 14 May 2025 14:15:06 +0000 (0:00:00.106) 0:00:00.106 ********* 2025-05-14 14:15:06.424297 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:06.448522 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:06.472990 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:06.501535 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:06.571673 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:06.575226 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:06.575258 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:06.575270 | orchestrator | 2025-05-14 14:15:06.575396 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 14:15:06.575492 | orchestrator | 2025-05-14 14:15:06.576520 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 14:15:06.578227 | orchestrator | Wednesday 14 May 2025 14:15:06 +0000 (0:00:00.228) 0:00:00.335 ********* 2025-05-14 14:15:10.315830 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:10.316456 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:10.317421 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:10.317976 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:10.319120 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:10.319909 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:10.320822 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:10.321788 | orchestrator | 2025-05-14 14:15:10.322471 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-14 14:15:10.323089 | orchestrator | 2025-05-14 14:15:10.324110 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 14:15:10.324524 | orchestrator | Wednesday 14 May 2025 14:15:10 +0000 (0:00:03.744) 0:00:04.080 
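Annotation: "osism apply bootstrap" hands the bootstrap playbook to the OSISM manager as a queued task, which is why output only appears after the "It takes a moment" notice. The first play then regroups hosts by their bootstrap state before anything is applied. A purely hypothetical ad-hoc sketch of that group_by pattern; the variable name "bootstrap" and the resulting group name are assumptions for illustration, not values read from this log:
# Hypothetical illustration of the "group hosts based on state" pattern (sketch only)
ansible all -i inventory/hosts.yml -m ansible.builtin.group_by -a 'key=bootstrap_{{ bootstrap | default("todo") }}'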
********* 2025-05-14 14:15:10.408298 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-14 14:15:10.408678 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-14 14:15:10.408702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-14 14:15:10.452302 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-14 14:15:10.453560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:15:10.453594 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-14 14:15:10.454128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:15:10.454689 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-14 14:15:10.455188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:15:10.457076 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:15:10.497209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-14 14:15:10.497385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:15:10.497615 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:15:10.498100 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-14 14:15:10.498296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:15:10.498652 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:15:10.500863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:15:10.741061 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-14 14:15:10.741873 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:15:10.742184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-14 14:15:10.743033 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:15:10.743510 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:15:10.744033 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-14 14:15:10.744446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:15:10.745310 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:10.745839 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:10.746278 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:15:10.747470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:15:10.748414 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:15:10.748668 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:10.749233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-14 14:15:10.749866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:15:10.750507 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:15:10.751015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:15:10.751937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-14 14:15:10.752395 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:15:10.753005 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 
14:15:10.754612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:15:10.756044 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 14:15:10.756492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:15:10.757167 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:10.757505 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 14:15:10.758144 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 14:15:10.758573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:15:10.759190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 14:15:10.759725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 14:15:10.760273 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 14:15:10.760651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:15:10.762689 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:10.763181 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 14:15:10.764169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 14:15:10.764795 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 14:15:10.765672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 14:15:10.767307 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:10.768725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 14:15:10.769156 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:15:10.770013 | orchestrator | 2025-05-14 14:15:10.770536 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-14 14:15:10.771271 | orchestrator | 2025-05-14 14:15:10.772270 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-14 14:15:10.773618 | orchestrator | Wednesday 14 May 2025 14:15:10 +0000 (0:00:00.424) 0:00:04.504 ********* 2025-05-14 14:15:10.813679 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:10.849492 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:10.865659 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:10.889527 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:10.941206 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:10.941478 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:10.943079 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:10.943918 | orchestrator | 2025-05-14 14:15:10.945004 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-14 14:15:10.945719 | orchestrator | Wednesday 14 May 2025 14:15:10 +0000 (0:00:00.199) 0:00:04.704 ********* 2025-05-14 14:15:12.170285 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:12.171363 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:12.171783 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:12.172664 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:12.173051 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:12.173449 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:12.173998 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:12.174500 | orchestrator | 2025-05-14 14:15:12.175025 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 
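Annotation: the osism.commons.hostname tasks set the hostname and then write /etc/hostname (results follow below). A rough manual equivalent for orientation only; the node name is just an example:
# Sketch: what the hostname handling boils down to on a single node
sudo hostnamectl set-hostname testbed-node-0   # example node name
cat /etc/hostname                              # should now contain the same name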
2025-05-14 14:15:12.177076 | orchestrator | Wednesday 14 May 2025 14:15:12 +0000 (0:00:01.229) 0:00:05.933 ********* 2025-05-14 14:15:13.374279 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:13.374458 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:13.375079 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:13.376490 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:13.377570 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:13.378414 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:13.379261 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:13.380110 | orchestrator | 2025-05-14 14:15:13.381111 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-14 14:15:13.382400 | orchestrator | Wednesday 14 May 2025 14:15:13 +0000 (0:00:01.198) 0:00:07.132 ********* 2025-05-14 14:15:13.637402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:13.637496 | orchestrator | 2025-05-14 14:15:13.638078 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-14 14:15:13.638254 | orchestrator | Wednesday 14 May 2025 14:15:13 +0000 (0:00:00.269) 0:00:07.401 ********* 2025-05-14 14:15:15.909873 | orchestrator | changed: [testbed-manager] 2025-05-14 14:15:15.910474 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:15.911031 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:15.912020 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:15.913477 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:15.914201 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:15.914946 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:15.915655 | orchestrator | 2025-05-14 14:15:15.917032 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-14 14:15:15.917061 | orchestrator | Wednesday 14 May 2025 14:15:15 +0000 (0:00:02.270) 0:00:09.671 ********* 2025-05-14 14:15:15.981987 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:16.132055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:16.132923 | orchestrator | 2025-05-14 14:15:16.136159 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-14 14:15:16.136192 | orchestrator | Wednesday 14 May 2025 14:15:16 +0000 (0:00:00.224) 0:00:09.896 ********* 2025-05-14 14:15:17.120204 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:17.122197 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:17.122274 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:17.123672 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:17.124236 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:17.125230 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:17.125953 | orchestrator | 2025-05-14 14:15:17.126436 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-14 14:15:17.127293 | orchestrator | Wednesday 14 May 2025 14:15:17 +0000 (0:00:00.986) 0:00:10.882 ********* 2025-05-14 14:15:17.184201 | orchestrator | 
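Annotation: "Configure proxy parameters for apt" and the environment-file task that continues below manage the two usual proxy locations on Debian-family systems. A hedged sketch of what they typically end up writing; the drop-in name and proxy URL are placeholders, not values from this deployment:
# Sketch only: apt proxy drop-in plus system-wide proxy variables
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/90proxy
Acquire::http::Proxy  "http://proxy.example.com:3128";
Acquire::https::Proxy "http://proxy.example.com:3128";
EOF
grep -iE 'https?_proxy|no_proxy' /etc/environment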
skipping: [testbed-manager] 2025-05-14 14:15:17.679952 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:17.680640 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:17.681751 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:17.682784 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:17.684375 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:17.685036 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:17.685986 | orchestrator | 2025-05-14 14:15:17.686523 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-14 14:15:17.687042 | orchestrator | Wednesday 14 May 2025 14:15:17 +0000 (0:00:00.560) 0:00:11.442 ********* 2025-05-14 14:15:17.772260 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:17.795428 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:17.822101 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:18.093503 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:18.094583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:18.095526 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:15:18.096466 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:18.097475 | orchestrator | 2025-05-14 14:15:18.098513 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-14 14:15:18.100476 | orchestrator | Wednesday 14 May 2025 14:15:18 +0000 (0:00:00.412) 0:00:11.855 ********* 2025-05-14 14:15:18.160809 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:18.185751 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:18.208151 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:18.233613 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:18.302984 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:18.305741 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:18.306783 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:15:18.308020 | orchestrator | 2025-05-14 14:15:18.308794 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-14 14:15:18.309443 | orchestrator | Wednesday 14 May 2025 14:15:18 +0000 (0:00:00.211) 0:00:12.066 ********* 2025-05-14 14:15:18.554908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:18.555472 | orchestrator | 2025-05-14 14:15:18.556065 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-14 14:15:18.557085 | orchestrator | Wednesday 14 May 2025 14:15:18 +0000 (0:00:00.252) 0:00:12.318 ********* 2025-05-14 14:15:18.818639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:18.819449 | orchestrator | 2025-05-14 14:15:18.820094 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-14 14:15:18.821145 | orchestrator | Wednesday 14 May 2025 14:15:18 +0000 (0:00:00.263) 0:00:12.582 ********* 2025-05-14 14:15:20.093925 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:20.094807 | orchestrator | 
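Annotation: before the resolvconf tasks below relink /etc/resolv.conf, it helps to know which mechanism currently owns the file; a quick manual check looks like this:
# Is /etc/resolv.conf a plain file or already a symlink into systemd-resolved?
ls -l /etc/resolv.conf
resolvectl status 2>/dev/null || cat /etc/resolv.conf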
ok: [testbed-node-0] 2025-05-14 14:15:20.096883 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:20.098360 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:20.099097 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:20.100160 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:20.100730 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:20.101987 | orchestrator | 2025-05-14 14:15:20.102455 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-14 14:15:20.102800 | orchestrator | Wednesday 14 May 2025 14:15:20 +0000 (0:00:01.273) 0:00:13.856 ********* 2025-05-14 14:15:20.166815 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:20.188981 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:20.216135 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:20.246808 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:20.311875 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:20.313269 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:20.314570 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:15:20.315609 | orchestrator | 2025-05-14 14:15:20.316660 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-14 14:15:20.317312 | orchestrator | Wednesday 14 May 2025 14:15:20 +0000 (0:00:00.219) 0:00:14.075 ********* 2025-05-14 14:15:20.834071 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:20.834187 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:20.834259 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:20.835037 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:20.835574 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:20.836238 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:20.837143 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:20.837692 | orchestrator | 2025-05-14 14:15:20.837893 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-14 14:15:20.838985 | orchestrator | Wednesday 14 May 2025 14:15:20 +0000 (0:00:00.521) 0:00:14.597 ********* 2025-05-14 14:15:20.910576 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:20.931834 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:20.983037 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:21.053625 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:21.053961 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:21.056978 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:21.057004 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:15:21.057017 | orchestrator | 2025-05-14 14:15:21.057031 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-14 14:15:21.057072 | orchestrator | Wednesday 14 May 2025 14:15:21 +0000 (0:00:00.219) 0:00:14.816 ********* 2025-05-14 14:15:21.563043 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:21.563778 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:21.564037 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:21.564650 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:21.565135 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:21.565454 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:21.565904 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:21.566380 | orchestrator | 2025-05-14 14:15:21.566688 | orchestrator | TASK 
[osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-14 14:15:21.567085 | orchestrator | Wednesday 14 May 2025 14:15:21 +0000 (0:00:00.505) 0:00:15.322 ********* 2025-05-14 14:15:22.623609 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:22.624125 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:22.626967 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:22.626992 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:22.627004 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:22.627015 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:22.627695 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:22.628779 | orchestrator | 2025-05-14 14:15:22.629763 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-14 14:15:22.630247 | orchestrator | Wednesday 14 May 2025 14:15:22 +0000 (0:00:01.063) 0:00:16.385 ********* 2025-05-14 14:15:24.699047 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:24.699242 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:24.699929 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:24.700801 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:24.702736 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:24.703860 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:24.705454 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:24.710227 | orchestrator | 2025-05-14 14:15:24.710358 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-14 14:15:24.711155 | orchestrator | Wednesday 14 May 2025 14:15:24 +0000 (0:00:02.074) 0:00:18.460 ********* 2025-05-14 14:15:24.974823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:24.975438 | orchestrator | 2025-05-14 14:15:24.976088 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-14 14:15:24.977093 | orchestrator | Wednesday 14 May 2025 14:15:24 +0000 (0:00:00.277) 0:00:18.737 ********* 2025-05-14 14:15:25.043915 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:26.383823 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:26.383954 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:26.383969 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:26.384148 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:26.387891 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:26.387923 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:26.387935 | orchestrator | 2025-05-14 14:15:26.387950 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 14:15:26.388027 | orchestrator | Wednesday 14 May 2025 14:15:26 +0000 (0:00:01.407) 0:00:20.145 ********* 2025-05-14 14:15:26.452904 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:26.481286 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:26.502083 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:26.530307 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:26.579899 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:26.579988 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:26.580544 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:26.582554 | 
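Annotation: taken together, the resolvconf tasks above link /etc/resolv.conf to the systemd-resolved stub file, deploy a resolved configuration and (re)start the service. A minimal manual sketch of the same end state:
# Sketch of the end state the role converges on
sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
sudo systemctl enable --now systemd-resolved
resolvectl status   # confirm the configured name servers are active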
orchestrator | 2025-05-14 14:15:26.582607 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 14:15:26.582652 | orchestrator | Wednesday 14 May 2025 14:15:26 +0000 (0:00:00.197) 0:00:20.343 ********* 2025-05-14 14:15:26.653735 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:26.673215 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:26.700156 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:26.721527 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:26.786566 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:26.787061 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:26.790527 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:26.791356 | orchestrator | 2025-05-14 14:15:26.792267 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 14:15:26.792867 | orchestrator | Wednesday 14 May 2025 14:15:26 +0000 (0:00:00.206) 0:00:20.549 ********* 2025-05-14 14:15:26.856376 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:26.878152 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:26.900798 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:26.920868 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:26.973371 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:26.974085 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:26.975236 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:26.975946 | orchestrator | 2025-05-14 14:15:26.977431 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 14:15:26.978280 | orchestrator | Wednesday 14 May 2025 14:15:26 +0000 (0:00:00.187) 0:00:20.737 ********* 2025-05-14 14:15:27.225516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:27.225693 | orchestrator | 2025-05-14 14:15:27.226685 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 14:15:27.229937 | orchestrator | Wednesday 14 May 2025 14:15:27 +0000 (0:00:00.251) 0:00:20.988 ********* 2025-05-14 14:15:27.752776 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:27.754268 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:27.754339 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:27.754769 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:27.755715 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:27.756427 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:27.757303 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:27.757889 | orchestrator | 2025-05-14 14:15:27.759093 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 14:15:27.759224 | orchestrator | Wednesday 14 May 2025 14:15:27 +0000 (0:00:00.526) 0:00:21.515 ********* 2025-05-14 14:15:27.856420 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:27.878179 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:27.900644 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:27.962155 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:27.963207 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:27.963415 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:27.964948 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
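Annotation: "Include tasks for Ubuntu < 24.04" is skipped on every host, i.e. these nodes run Ubuntu 24.04 or newer, where apt sources use the deb822 format; the next task below copies exactly such a ubuntu.sources file. For orientation, a stock 24.04 (noble) stanza looks roughly like the comments here; the mirror URIs are upstream defaults, not necessarily what the role deploys:
# Sketch: inspect the deb822 sources file on a 24.04 host
cat /etc/apt/sources.list.d/ubuntu.sources
# Types: deb
# URIs: http://archive.ubuntu.com/ubuntu/
# Suites: noble noble-updates noble-backports
# Components: main restricted universe multiverse
# Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg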
14:15:27.965106 | orchestrator | 2025-05-14 14:15:27.966127 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 14:15:27.966451 | orchestrator | Wednesday 14 May 2025 14:15:27 +0000 (0:00:00.209) 0:00:21.725 ********* 2025-05-14 14:15:29.002229 | orchestrator | changed: [testbed-manager] 2025-05-14 14:15:29.002763 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:29.004342 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:29.005199 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:29.005936 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:29.007041 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:29.008458 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:29.009586 | orchestrator | 2025-05-14 14:15:29.009869 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 14:15:29.010920 | orchestrator | Wednesday 14 May 2025 14:15:28 +0000 (0:00:01.037) 0:00:22.763 ********* 2025-05-14 14:15:29.536790 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:29.536945 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:29.538618 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:29.539212 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:29.539239 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:29.540346 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:29.543419 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:29.543521 | orchestrator | 2025-05-14 14:15:29.543548 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 14:15:29.543571 | orchestrator | Wednesday 14 May 2025 14:15:29 +0000 (0:00:00.537) 0:00:23.300 ********* 2025-05-14 14:15:30.606376 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:30.606560 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:30.608196 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:30.608764 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:30.610456 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:30.611039 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:30.612586 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:30.612694 | orchestrator | 2025-05-14 14:15:30.613516 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 14:15:30.614178 | orchestrator | Wednesday 14 May 2025 14:15:30 +0000 (0:00:01.067) 0:00:24.368 ********* 2025-05-14 14:15:43.981861 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:43.982127 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:43.982152 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:43.982164 | orchestrator | changed: [testbed-manager] 2025-05-14 14:15:43.982268 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:43.985604 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:43.985919 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:43.986509 | orchestrator | 2025-05-14 14:15:43.987212 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-14 14:15:43.987890 | orchestrator | Wednesday 14 May 2025 14:15:43 +0000 (0:00:13.370) 0:00:37.739 ********* 2025-05-14 14:15:44.059830 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:44.097209 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:44.121197 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:44.142677 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 14:15:44.210659 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:44.211006 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:44.211399 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:44.212488 | orchestrator | 2025-05-14 14:15:44.212530 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-14 14:15:44.212552 | orchestrator | Wednesday 14 May 2025 14:15:44 +0000 (0:00:00.234) 0:00:37.973 ********* 2025-05-14 14:15:44.316259 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:44.346284 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:44.371261 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:44.393177 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:44.458375 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:44.458670 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:44.459469 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:44.460172 | orchestrator | 2025-05-14 14:15:44.463965 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-14 14:15:44.463997 | orchestrator | Wednesday 14 May 2025 14:15:44 +0000 (0:00:00.247) 0:00:38.221 ********* 2025-05-14 14:15:44.530994 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:44.587470 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:44.612444 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:44.675615 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:44.676398 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:44.676741 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:44.678053 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:44.678994 | orchestrator | 2025-05-14 14:15:44.680355 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-14 14:15:44.681218 | orchestrator | Wednesday 14 May 2025 14:15:44 +0000 (0:00:00.217) 0:00:38.439 ********* 2025-05-14 14:15:44.990195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:44.993085 | orchestrator | 2025-05-14 14:15:44.993125 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-14 14:15:44.993854 | orchestrator | Wednesday 14 May 2025 14:15:44 +0000 (0:00:00.312) 0:00:38.752 ********* 2025-05-14 14:15:46.500482 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:46.500590 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:46.500668 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:46.500798 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:46.501129 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:46.501219 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:46.502967 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:46.503057 | orchestrator | 2025-05-14 14:15:46.503128 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-14 14:15:46.504109 | orchestrator | Wednesday 14 May 2025 14:15:46 +0000 (0:00:01.508) 0:00:40.261 ********* 2025-05-14 14:15:47.492785 | orchestrator | changed: [testbed-manager] 2025-05-14 14:15:47.493001 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:47.493473 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:47.493815 | orchestrator | 
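Annotation: the rsyslog tasks around here install the package, deploy rsyslog.conf and keep the service managed; the fluentd forwarding rule follows below. Checking the result by hand would look like:
# Validate the deployed rsyslog configuration and confirm the service is up
sudo rsyslogd -N1
systemctl status rsyslog --no-pager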
changed: [testbed-node-1] 2025-05-14 14:15:47.494276 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:47.495184 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:47.495209 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:47.495624 | orchestrator | 2025-05-14 14:15:47.496053 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-14 14:15:47.496357 | orchestrator | Wednesday 14 May 2025 14:15:47 +0000 (0:00:00.994) 0:00:41.255 ********* 2025-05-14 14:15:48.256981 | orchestrator | ok: [testbed-manager] 2025-05-14 14:15:48.257197 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:15:48.257800 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:15:48.258264 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:15:48.260819 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:15:48.261010 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:15:48.261804 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:15:48.262899 | orchestrator | 2025-05-14 14:15:48.264470 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-14 14:15:48.265150 | orchestrator | Wednesday 14 May 2025 14:15:48 +0000 (0:00:00.764) 0:00:42.019 ********* 2025-05-14 14:15:48.530569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:15:48.530673 | orchestrator | 2025-05-14 14:15:48.530805 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-14 14:15:48.531229 | orchestrator | Wednesday 14 May 2025 14:15:48 +0000 (0:00:00.274) 0:00:42.294 ********* 2025-05-14 14:15:49.511733 | orchestrator | changed: [testbed-manager] 2025-05-14 14:15:49.511919 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:15:49.512061 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:15:49.512091 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:15:49.512316 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:15:49.513846 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:15:49.514974 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:15:49.515788 | orchestrator | 2025-05-14 14:15:49.517034 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-14 14:15:49.517633 | orchestrator | Wednesday 14 May 2025 14:15:49 +0000 (0:00:00.979) 0:00:43.273 ********* 2025-05-14 14:15:49.606103 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:15:49.633760 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:15:49.648489 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:15:49.785161 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:15:49.785270 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:15:49.785467 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:15:49.786424 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:15:49.788170 | orchestrator | 2025-05-14 14:15:49.788439 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-14 14:15:49.788821 | orchestrator | Wednesday 14 May 2025 14:15:49 +0000 (0:00:00.274) 0:00:43.548 ********* 2025-05-14 14:16:00.966900 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:16:00.967003 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:16:00.967012 | 
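Annotation: "Forward syslog message to local fluentd daemon" typically materialises as an rsyslog drop-in that ships messages to a local fluentd syslog input. The file name, target address and port below are assumptions for the sketch, not values read from this deployment:
# Hypothetical forwarding rule (omfwd) to a local fluentd syslog input
cat <<'EOF' | sudo tee /etc/rsyslog.d/60-fluentd.conf
*.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
EOF
sudo systemctl restart rsyslog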
orchestrator | changed: [testbed-node-2] 2025-05-14 14:16:00.967066 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:16:00.967826 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:16:00.969788 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:16:00.970398 | orchestrator | changed: [testbed-manager] 2025-05-14 14:16:00.970682 | orchestrator | 2025-05-14 14:16:00.971417 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-14 14:16:00.972141 | orchestrator | Wednesday 14 May 2025 14:16:00 +0000 (0:00:11.178) 0:00:54.727 ********* 2025-05-14 14:16:01.655953 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:01.656125 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:01.658534 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:01.659132 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:01.662108 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:01.662493 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:01.663022 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:01.663452 | orchestrator | 2025-05-14 14:16:01.663918 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-14 14:16:01.664317 | orchestrator | Wednesday 14 May 2025 14:16:01 +0000 (0:00:00.691) 0:00:55.419 ********* 2025-05-14 14:16:02.498250 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:02.498678 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:02.499513 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:02.503116 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:02.503164 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:02.503181 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:02.503193 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:02.503206 | orchestrator | 2025-05-14 14:16:02.503831 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-14 14:16:02.503906 | orchestrator | Wednesday 14 May 2025 14:16:02 +0000 (0:00:00.842) 0:00:56.261 ********* 2025-05-14 14:16:02.571223 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:02.597013 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:02.624678 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:02.650179 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:02.706223 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:02.706632 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:02.707323 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:02.708165 | orchestrator | 2025-05-14 14:16:02.708958 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-14 14:16:02.709101 | orchestrator | Wednesday 14 May 2025 14:16:02 +0000 (0:00:00.208) 0:00:56.469 ********* 2025-05-14 14:16:02.773700 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:02.797907 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:02.821606 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:02.849067 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:02.897595 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:02.898362 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:02.898497 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:02.899079 | orchestrator | 2025-05-14 14:16:02.899550 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-14 14:16:02.900027 | orchestrator | Wednesday 14 May 2025 14:16:02 
+0000 (0:00:00.192) 0:00:56.662 ********* 2025-05-14 14:16:03.176618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:16:03.178407 | orchestrator | 2025-05-14 14:16:03.179317 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-14 14:16:03.180230 | orchestrator | Wednesday 14 May 2025 14:16:03 +0000 (0:00:00.277) 0:00:56.939 ********* 2025-05-14 14:16:04.725189 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:04.725680 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:04.728443 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:04.732944 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:04.734063 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:04.734072 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:04.734608 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:04.734991 | orchestrator | 2025-05-14 14:16:04.735383 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-14 14:16:04.735768 | orchestrator | Wednesday 14 May 2025 14:16:04 +0000 (0:00:01.547) 0:00:58.486 ********* 2025-05-14 14:16:05.272538 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:16:05.272668 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:16:05.272683 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:16:05.273857 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:16:05.273881 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:16:05.273893 | orchestrator | changed: [testbed-manager] 2025-05-14 14:16:05.273937 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:16:05.273950 | orchestrator | 2025-05-14 14:16:05.274011 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-14 14:16:05.274149 | orchestrator | Wednesday 14 May 2025 14:16:05 +0000 (0:00:00.547) 0:00:59.034 ********* 2025-05-14 14:16:05.366055 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:05.392912 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:05.416087 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:05.485756 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:05.487981 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:05.488012 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:05.488024 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:05.488165 | orchestrator | 2025-05-14 14:16:05.489917 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-14 14:16:05.490958 | orchestrator | Wednesday 14 May 2025 14:16:05 +0000 (0:00:00.215) 0:00:59.249 ********* 2025-05-14 14:16:06.562590 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:06.562704 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:06.563139 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:06.563928 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:06.564678 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:06.565506 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:06.566006 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:06.566773 | orchestrator | 2025-05-14 14:16:06.567482 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-14 14:16:06.567965 | orchestrator | 
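Annotation: the packages-role steps here (needrestart mode, cache update, download/upgrade, and the install/cleanup tasks that follow) map onto a fairly ordinary apt sequence. A rough manual sketch, assuming needrestart is switched to automatic mode; the drop-in file name is a placeholder:
# Assumed needrestart drop-in: 'a' = restart services automatically instead of prompting
echo "\$nrconf{restart} = 'a';" | sudo tee /etc/needrestart/conf.d/90-osism.conf
# Rough equivalent of the apt handling driven by the role
sudo apt-get update
sudo apt-get dist-upgrade -y --download-only   # "Download upgrade packages"
sudo apt-get dist-upgrade -y                   # "Upgrade packages"
sudo apt-get autoclean && sudo apt-get autoremove -y   # cache/dependency cleanup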
Wednesday 14 May 2025 14:16:06 +0000 (0:00:01.073) 0:01:00.323 ********* 2025-05-14 14:16:08.065810 | orchestrator | changed: [testbed-manager] 2025-05-14 14:16:08.066077 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:16:08.066978 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:16:08.068216 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:16:08.068598 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:16:08.069824 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:16:08.070378 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:16:08.071023 | orchestrator | 2025-05-14 14:16:08.071618 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-14 14:16:08.072745 | orchestrator | Wednesday 14 May 2025 14:16:08 +0000 (0:00:01.503) 0:01:01.827 ********* 2025-05-14 14:16:10.208522 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:10.208622 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:10.209839 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:10.209868 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:10.211229 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:10.212284 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:10.213185 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:10.216081 | orchestrator | 2025-05-14 14:16:10.216803 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-14 14:16:10.217492 | orchestrator | Wednesday 14 May 2025 14:16:10 +0000 (0:00:02.142) 0:01:03.970 ********* 2025-05-14 14:16:48.411640 | orchestrator | ok: [testbed-manager] 2025-05-14 14:16:48.411789 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:16:48.411810 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:16:48.412423 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:16:48.413526 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:16:48.414232 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:16:48.414776 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:16:48.415505 | orchestrator | 2025-05-14 14:16:48.416300 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-14 14:16:48.416823 | orchestrator | Wednesday 14 May 2025 14:16:48 +0000 (0:00:38.199) 0:01:42.170 ********* 2025-05-14 14:18:10.383826 | orchestrator | changed: [testbed-manager] 2025-05-14 14:18:10.383961 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:18:10.383972 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:18:10.383979 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:18:10.383986 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:18:10.384033 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:18:10.384641 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:18:10.385713 | orchestrator | 2025-05-14 14:18:10.385928 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-14 14:18:10.386510 | orchestrator | Wednesday 14 May 2025 14:18:10 +0000 (0:01:21.969) 0:03:04.139 ********* 2025-05-14 14:18:11.854428 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:11.855914 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:11.855948 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:11.855960 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:11.858385 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:11.859071 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:11.859230 | orchestrator | ok: 
[testbed-node-3] 2025-05-14 14:18:11.860443 | orchestrator | 2025-05-14 14:18:11.861688 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-14 14:18:11.861978 | orchestrator | Wednesday 14 May 2025 14:18:11 +0000 (0:00:01.476) 0:03:05.616 ********* 2025-05-14 14:18:23.773819 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:23.773925 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:23.773937 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:23.773945 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:23.774005 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:23.776591 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:23.777386 | orchestrator | changed: [testbed-manager] 2025-05-14 14:18:23.778335 | orchestrator | 2025-05-14 14:18:23.779222 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-14 14:18:23.779670 | orchestrator | Wednesday 14 May 2025 14:18:23 +0000 (0:00:11.912) 0:03:17.528 ********* 2025-05-14 14:18:24.203346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-14 14:18:24.204110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-14 14:18:24.205554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-14 14:18:24.206420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-14 14:18:24.207056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-14 14:18:24.208441 | orchestrator | 2025-05-14 14:18:24.208833 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-14 14:18:24.209704 | orchestrator | Wednesday 14 May 2025 14:18:24 +0000 
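Annotation: the sysctl include above enumerates parameter groups (elasticsearch, rabbitmq, generic, compute, k3s_node) that are only applied on hosts belonging to the matching group, which is why the elasticsearch and compute tasks below change settings on some nodes and skip on the rest. A hedged sketch of how one such group usually lands on disk (file name assumed; the value is the one listed above):
# Sketch: persist one sysctl group as a drop-in and apply it
cat <<'EOF' | sudo tee /etc/sysctl.d/90-generic.conf
vm.swappiness = 1
EOF
sudo sysctl --system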
(0:00:00.437) 0:03:17.965 ********* 2025-05-14 14:18:24.273605 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 14:18:24.273716 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 14:18:24.303693 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:18:24.304047 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 14:18:24.346383 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:18:24.347359 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 14:18:24.373723 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:18:24.396862 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:18:24.953556 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 14:18:24.954104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 14:18:24.955327 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 14:18:24.956707 | orchestrator | 2025-05-14 14:18:24.957026 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-14 14:18:24.957974 | orchestrator | Wednesday 14 May 2025 14:18:24 +0000 (0:00:00.748) 0:03:18.714 ********* 2025-05-14 14:18:25.006149 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 14:18:25.006251 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 14:18:25.041560 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 14:18:25.041726 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 14:18:25.042758 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 14:18:25.043237 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 14:18:25.044000 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 14:18:25.044990 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 14:18:25.045704 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 14:18:25.046173 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 14:18:25.047014 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 14:18:25.047527 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 14:18:25.048365 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 14:18:25.049677 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 14:18:25.049699 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 14:18:25.050006 | orchestrator | skipping: 
[testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 14:18:25.053804 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 14:18:25.053837 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 14:18:25.053848 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 14:18:25.053859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 14:18:25.074404 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:18:25.076314 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 14:18:25.077170 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 14:18:25.077903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 14:18:25.080009 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 14:18:25.082255 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 14:18:25.083174 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 14:18:25.083443 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 14:18:25.111462 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:18:25.111977 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 14:18:25.112957 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 14:18:25.114432 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 14:18:25.114982 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 14:18:25.115693 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 14:18:25.116238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 14:18:25.116939 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 14:18:25.117587 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 14:18:25.118263 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 14:18:25.118898 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 14:18:25.119761 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 14:18:25.120416 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 14:18:25.121027 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 14:18:25.136153 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:18:28.549045 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
14:18:28.549504 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 14:18:28.549723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 14:18:28.550361 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 14:18:28.552453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 14:18:28.552955 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 14:18:28.553973 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 14:18:28.555053 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 14:18:28.555548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 14:18:28.556076 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 14:18:28.556622 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 14:18:28.557382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 14:18:28.557945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 14:18:28.558750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 14:18:28.559206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 14:18:28.559928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 14:18:28.560258 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 14:18:28.560897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 14:18:28.561190 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 14:18:28.561886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 14:18:28.562203 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 14:18:28.562708 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 14:18:28.563004 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 14:18:28.565275 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 14:18:28.565705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 14:18:28.566476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 14:18:28.566931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 14:18:28.567319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 14:18:28.567682 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 14:18:28.568254 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 14:18:28.568562 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 14:18:28.569604 | orchestrator | 2025-05-14 14:18:28.570392 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-14 14:18:28.571134 | orchestrator | Wednesday 14 May 2025 14:18:28 +0000 (0:00:03.595) 0:03:22.309 ********* 2025-05-14 14:18:30.134788 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.136592 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.141009 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.141828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.142991 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.144307 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.145979 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 14:18:30.146058 | orchestrator | 2025-05-14 14:18:30.146494 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-14 14:18:30.147110 | orchestrator | Wednesday 14 May 2025 14:18:30 +0000 (0:00:01.586) 0:03:23.896 ********* 2025-05-14 14:18:30.190950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 14:18:30.218091 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:18:30.289349 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 14:18:30.617365 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 14:18:30.618416 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:18:30.618961 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:18:30.619632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 14:18:30.620334 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:18:30.621356 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 14:18:30.621874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 14:18:30.622466 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 14:18:30.622820 | orchestrator | 2025-05-14 14:18:30.623547 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-14 14:18:30.624113 | orchestrator | Wednesday 14 May 2025 14:18:30 +0000 (0:00:00.483) 0:03:24.380 ********* 2025-05-14 14:18:30.678324 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 14:18:30.700072 | orchestrator | skipping: [testbed-manager] 2025-05-14 
14:18:30.776378 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 14:18:31.179857 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:18:31.180066 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 14:18:31.180399 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:18:31.181395 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 14:18:31.181639 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:18:31.181897 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 14:18:31.182520 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 14:18:31.185927 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 14:18:31.185980 | orchestrator | 2025-05-14 14:18:31.185996 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-14 14:18:31.186009 | orchestrator | Wednesday 14 May 2025 14:18:31 +0000 (0:00:00.562) 0:03:24.943 ********* 2025-05-14 14:18:31.272108 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:18:31.300000 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:18:31.335465 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:18:31.361169 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:18:31.481776 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:18:31.482194 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:18:31.482328 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:18:31.482876 | orchestrator | 2025-05-14 14:18:31.483329 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-14 14:18:31.483686 | orchestrator | Wednesday 14 May 2025 14:18:31 +0000 (0:00:00.302) 0:03:25.245 ********* 2025-05-14 14:18:38.055552 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:38.056024 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:38.056646 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:38.057632 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:38.059720 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:38.060772 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:38.061136 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:38.061663 | orchestrator | 2025-05-14 14:18:38.062533 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-14 14:18:38.062855 | orchestrator | Wednesday 14 May 2025 14:18:38 +0000 (0:00:06.572) 0:03:31.818 ********* 2025-05-14 14:18:38.137561 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-14 14:18:38.137737 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-14 14:18:38.186607 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:18:38.187059 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-14 14:18:38.227792 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:18:38.263340 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-14 14:18:38.263594 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:18:38.315248 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-14 14:18:38.315336 | orchestrator | skipping: 
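
The osism.commons.sysctl tasks above apply a per-group list of kernel parameters (rabbitmq, generic, compute, k3s_node). A minimal sketch of that pattern, assuming an illustrative variable name (sysctl_parameters_rabbitmq) rather than the collection's actual defaults:

  # Sketch only, not the osism.commons.sysctl implementation: apply each
  # parameter from a per-group list, but only on hosts in that group.
  - name: Set sysctl parameters on rabbitmq
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      state: present
      sysctl_set: true
      reload: true
    loop: "{{ sysctl_parameters_rabbitmq }}"  # e.g. net.ipv4.tcp_keepalive_time: 6
    when: "'rabbitmq' in group_names"
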
[testbed-node-5] 2025-05-14 14:18:38.315978 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-14 14:18:38.402963 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:18:38.403478 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:18:38.404328 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-14 14:18:38.404699 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:18:38.405547 | orchestrator | 2025-05-14 14:18:38.405809 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-14 14:18:38.406541 | orchestrator | Wednesday 14 May 2025 14:18:38 +0000 (0:00:00.346) 0:03:32.164 ********* 2025-05-14 14:18:39.476172 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-14 14:18:39.476729 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-14 14:18:39.478620 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-14 14:18:39.478943 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-14 14:18:39.479488 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-14 14:18:39.480147 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-14 14:18:39.480762 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-14 14:18:39.482174 | orchestrator | 2025-05-14 14:18:39.482197 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-14 14:18:39.482215 | orchestrator | Wednesday 14 May 2025 14:18:39 +0000 (0:00:01.073) 0:03:33.238 ********* 2025-05-14 14:18:39.869887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:18:39.872088 | orchestrator | 2025-05-14 14:18:39.873654 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-14 14:18:39.874600 | orchestrator | Wednesday 14 May 2025 14:18:39 +0000 (0:00:00.393) 0:03:33.631 ********* 2025-05-14 14:18:41.175042 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:41.175159 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:41.176153 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:41.177051 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:41.177507 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:41.178845 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:41.179806 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:41.180249 | orchestrator | 2025-05-14 14:18:41.181028 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-14 14:18:41.181636 | orchestrator | Wednesday 14 May 2025 14:18:41 +0000 (0:00:01.304) 0:03:34.936 ********* 2025-05-14 14:18:41.842506 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:41.843463 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:41.844374 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:41.845257 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:41.846145 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:41.846453 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:41.847644 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:41.848425 | orchestrator | 2025-05-14 14:18:41.849357 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-14 14:18:41.850100 | orchestrator | Wednesday 14 May 2025 14:18:41 +0000 
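
The osism.commons.services steps above first gather service facts and then make sure required services such as cron are running. A sketch of the same pattern; the required_services variable name is illustrative:

  # Sketch: gather service facts, then start/enable whatever is required.
  - name: Populate service facts
    ansible.builtin.service_facts:

  - name: Start/enable required services
    ansible.builtin.service:
      name: "{{ item }}"
      state: started
      enabled: true
    loop: "{{ required_services | default(['cron']) }}"
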
(0:00:00.669) 0:03:35.606 ********* 2025-05-14 14:18:42.503552 | orchestrator | changed: [testbed-manager] 2025-05-14 14:18:42.504428 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:18:42.504856 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:18:42.506306 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:18:42.508887 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:18:42.510678 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:18:42.510702 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:18:42.511460 | orchestrator | 2025-05-14 14:18:42.513858 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-14 14:18:42.514269 | orchestrator | Wednesday 14 May 2025 14:18:42 +0000 (0:00:00.657) 0:03:36.264 ********* 2025-05-14 14:18:43.086430 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:43.086940 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:43.087955 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:43.088293 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:43.090265 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:43.090915 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:43.092011 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:43.092425 | orchestrator | 2025-05-14 14:18:43.093061 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-14 14:18:43.093461 | orchestrator | Wednesday 14 May 2025 14:18:43 +0000 (0:00:00.584) 0:03:36.849 ********* 2025-05-14 14:18:44.009747 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230692.6987703, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.010138 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230730.7437904, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.010169 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230728.9416962, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.010204 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
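
The motd-news handling above checks for /etc/default/motd-news before disabling the dynamic news feed. A sketch of one common way to do this; whether the osism.commons.motd role edits ENABLED= or disables a systemd timer is not visible in the log, so treat the details as an assumption:

  # Sketch, assumption: disable motd-news by setting ENABLED=0 if the file exists.
  - name: Check if /etc/default/motd-news exists
    ansible.builtin.stat:
      path: /etc/default/motd-news
    register: motd_news_file

  - name: Disable the dynamic motd-news service
    ansible.builtin.lineinfile:
      path: /etc/default/motd-news
      regexp: '^ENABLED='
      line: ENABLED=0
    when: motd_news_file.stat.exists
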
'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230721.084808, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.010564 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230737.1745024, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.010822 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230718.9790542, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.011059 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747230723.2093754, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.011656 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230727.5222023, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.011938 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230648.6169007, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.012158 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230657.5092804, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.012694 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230649.934303, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.012781 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230644.3860574, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.013212 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230645.6677303, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.013743 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747230645.633592, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:18:44.013802 | orchestrator | 2025-05-14 14:18:44.014171 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-14 14:18:44.016695 | orchestrator | Wednesday 14 May 2025 14:18:44 +0000 (0:00:00.923) 0:03:37.773 ********* 2025-05-14 14:18:45.218300 | orchestrator | changed: [testbed-manager] 2025-05-14 14:18:45.218877 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:18:45.220274 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:18:45.220518 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:18:45.222222 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:18:45.222936 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:18:45.223637 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:18:45.224222 | 
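
The loop items above are find results for files in /etc/pam.d from which the pam_motd.so rule is removed. A minimal sketch of that find-and-strip pattern (the register name is illustrative):

  # Sketch: list the PAM configuration files, then drop any pam_motd.so rule.
  - name: Get all configuration files in /etc/pam.d
    ansible.builtin.find:
      paths: /etc/pam.d
      file_type: file
    register: pam_d_files

  - name: Remove pam_motd.so rule
    ansible.builtin.lineinfile:
      path: "{{ item.path }}"
      regexp: 'pam_motd\.so'
      state: absent
    loop: "{{ pam_d_files.files }}"
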
orchestrator | 2025-05-14 14:18:45.224668 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-14 14:18:45.225085 | orchestrator | Wednesday 14 May 2025 14:18:45 +0000 (0:00:01.206) 0:03:38.979 ********* 2025-05-14 14:18:46.379321 | orchestrator | changed: [testbed-manager] 2025-05-14 14:18:46.379429 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:18:46.379443 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:18:46.381355 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:18:46.382604 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:18:46.384702 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:18:46.384728 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:18:46.385817 | orchestrator | 2025-05-14 14:18:46.386881 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-14 14:18:46.387756 | orchestrator | Wednesday 14 May 2025 14:18:46 +0000 (0:00:01.156) 0:03:40.135 ********* 2025-05-14 14:18:46.505227 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:18:46.537554 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:18:46.569984 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:18:46.611079 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:18:46.689322 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:18:46.689424 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:18:46.689825 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:18:46.690564 | orchestrator | 2025-05-14 14:18:46.691843 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-14 14:18:46.692239 | orchestrator | Wednesday 14 May 2025 14:18:46 +0000 (0:00:00.316) 0:03:40.452 ********* 2025-05-14 14:18:47.387367 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:47.387615 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:47.387716 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:47.388186 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:47.388627 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:47.391254 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:47.391285 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:47.391670 | orchestrator | 2025-05-14 14:18:47.391763 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-14 14:18:47.392193 | orchestrator | Wednesday 14 May 2025 14:18:47 +0000 (0:00:00.696) 0:03:41.149 ********* 2025-05-14 14:18:47.784193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:18:47.784300 | orchestrator | 2025-05-14 14:18:47.788784 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-14 14:18:47.790232 | orchestrator | Wednesday 14 May 2025 14:18:47 +0000 (0:00:00.392) 0:03:41.542 ********* 2025-05-14 14:18:55.437558 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:55.437918 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:18:55.439310 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:18:55.441808 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:18:55.442602 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:18:55.443658 | orchestrator | changed: [testbed-node-5] 2025-05-14 
14:18:55.444017 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:18:55.444821 | orchestrator | 2025-05-14 14:18:55.445878 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-14 14:18:55.446568 | orchestrator | Wednesday 14 May 2025 14:18:55 +0000 (0:00:07.656) 0:03:49.199 ********* 2025-05-14 14:18:56.542848 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:56.544176 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:56.544205 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:56.545182 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:56.546253 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:56.547618 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:56.547982 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:56.548835 | orchestrator | 2025-05-14 14:18:56.549222 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-14 14:18:56.549743 | orchestrator | Wednesday 14 May 2025 14:18:56 +0000 (0:00:01.106) 0:03:50.305 ********* 2025-05-14 14:18:57.540559 | orchestrator | ok: [testbed-manager] 2025-05-14 14:18:57.541404 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:18:57.541987 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:18:57.543232 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:18:57.544326 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:18:57.544796 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:18:57.546484 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:18:57.548228 | orchestrator | 2025-05-14 14:18:57.548772 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-14 14:18:57.550393 | orchestrator | Wednesday 14 May 2025 14:18:57 +0000 (0:00:00.997) 0:03:51.302 ********* 2025-05-14 14:18:57.925279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:18:57.930997 | orchestrator | 2025-05-14 14:18:57.931043 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-14 14:18:57.931057 | orchestrator | Wednesday 14 May 2025 14:18:57 +0000 (0:00:00.383) 0:03:51.686 ********* 2025-05-14 14:19:06.321383 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:19:06.322732 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:19:06.324076 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:19:06.325114 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:19:06.325716 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:19:06.326417 | orchestrator | changed: [testbed-manager] 2025-05-14 14:19:06.327066 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:19:06.327743 | orchestrator | 2025-05-14 14:19:06.328298 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-14 14:19:06.329363 | orchestrator | Wednesday 14 May 2025 14:19:06 +0000 (0:00:08.396) 0:04:00.083 ********* 2025-05-14 14:19:07.089711 | orchestrator | changed: [testbed-manager] 2025-05-14 14:19:07.089930 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:19:07.090892 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:19:07.091122 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:19:07.091669 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:19:07.092106 | 
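
The osism.services.rng tasks above install an rng package, remove haveged and manage the service. A sketch under the assumption that the package is rng-tools and the unit is rngd; the role's actual names may differ per distribution:

  # Sketch; package and service names are assumptions.
  - name: Install rng package
    ansible.builtin.apt:
      name: rng-tools
      state: present

  - name: Remove haveged package
    ansible.builtin.apt:
      name: haveged
      state: absent

  - name: Manage rng service
    ansible.builtin.service:
      name: rngd
      state: started
      enabled: true
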
orchestrator | changed: [testbed-node-1] 2025-05-14 14:19:07.092573 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:19:07.093062 | orchestrator | 2025-05-14 14:19:07.096543 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-14 14:19:07.097684 | orchestrator | Wednesday 14 May 2025 14:19:07 +0000 (0:00:00.770) 0:04:00.853 ********* 2025-05-14 14:19:08.218272 | orchestrator | changed: [testbed-manager] 2025-05-14 14:19:08.218484 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:19:08.219051 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:19:08.219821 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:19:08.220684 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:19:08.223053 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:19:08.223688 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:19:08.224841 | orchestrator | 2025-05-14 14:19:08.225805 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-14 14:19:08.226535 | orchestrator | Wednesday 14 May 2025 14:19:08 +0000 (0:00:01.125) 0:04:01.978 ********* 2025-05-14 14:19:10.216979 | orchestrator | changed: [testbed-manager] 2025-05-14 14:19:10.217132 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:19:10.217297 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:19:10.218734 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:19:10.221025 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:19:10.221431 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:19:10.221984 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:19:10.222463 | orchestrator | 2025-05-14 14:19:10.223659 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-14 14:19:10.224003 | orchestrator | Wednesday 14 May 2025 14:19:10 +0000 (0:00:01.993) 0:04:03.972 ********* 2025-05-14 14:19:10.318477 | orchestrator | ok: [testbed-manager] 2025-05-14 14:19:10.349660 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:19:10.381584 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:19:10.417818 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:19:10.479309 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:19:10.479444 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:19:10.479595 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:19:10.480137 | orchestrator | 2025-05-14 14:19:10.480246 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-14 14:19:10.480837 | orchestrator | Wednesday 14 May 2025 14:19:10 +0000 (0:00:00.272) 0:04:04.244 ********* 2025-05-14 14:19:10.562525 | orchestrator | ok: [testbed-manager] 2025-05-14 14:19:10.592930 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:19:10.664891 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:19:10.698254 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:19:10.781296 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:19:10.781947 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:19:10.782680 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:19:10.782961 | orchestrator | 2025-05-14 14:19:10.783494 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-14 14:19:10.783973 | orchestrator | Wednesday 14 May 2025 14:19:10 +0000 (0:00:00.300) 0:04:04.545 ********* 2025-05-14 14:19:10.882455 | orchestrator | ok: [testbed-manager] 2025-05-14 
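
The smartd steps above install smartmontools, prepare /var/log/smartd, drop a configuration file and manage the service. A sketch with a hypothetical template name (smartd.conf.j2) and the Debian-family unit name assumed to be smartmontools:

  # Sketch; template and service names are assumptions.
  - name: Install smartmontools package
    ansible.builtin.apt:
      name: smartmontools
      state: present

  - name: Create /var/log/smartd directory
    ansible.builtin.file:
      path: /var/log/smartd
      state: directory
      mode: "0755"

  - name: Copy smartmontools configuration file
    ansible.builtin.template:
      src: smartd.conf.j2          # hypothetical template
      dest: /etc/smartd.conf
      mode: "0644"

  - name: Manage smartd service
    ansible.builtin.service:
      name: smartmontools
      state: started
      enabled: true
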
14:19:10.914677 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:19:10.947114 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:19:10.992198 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:19:11.059914 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:19:11.060940 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:19:11.061817 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:19:11.062122 | orchestrator | 2025-05-14 14:19:11.062752 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-14 14:19:11.063206 | orchestrator | Wednesday 14 May 2025 14:19:11 +0000 (0:00:00.277) 0:04:04.823 ********* 2025-05-14 14:19:16.793552 | orchestrator | ok: [testbed-manager] 2025-05-14 14:19:16.793681 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:19:16.794223 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:19:16.794799 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:19:16.795942 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:19:16.796236 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:19:16.796879 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:19:16.797571 | orchestrator | 2025-05-14 14:19:16.798189 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-14 14:19:16.798771 | orchestrator | Wednesday 14 May 2025 14:19:16 +0000 (0:00:05.732) 0:04:10.555 ********* 2025-05-14 14:19:17.260291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:19:17.260400 | orchestrator | 2025-05-14 14:19:17.260477 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-14 14:19:17.260569 | orchestrator | Wednesday 14 May 2025 14:19:17 +0000 (0:00:00.466) 0:04:11.022 ********* 2025-05-14 14:19:17.302471 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.336015 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-14 14:19:17.413768 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:19:17.413959 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.414399 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-14 14:19:17.414993 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.415363 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-14 14:19:17.456361 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:19:17.456470 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.456574 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-14 14:19:17.497887 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:19:17.550445 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:19:17.551275 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.552152 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-14 14:19:17.552892 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.553651 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-14 14:19:17.640571 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:19:17.641137 | orchestrator | skipping: [testbed-node-1] 
2025-05-14 14:19:17.641890 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-14 14:19:17.642826 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-14 14:19:17.643258 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:19:17.645011 | orchestrator | 2025-05-14 14:19:17.645036 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-14 14:19:17.645049 | orchestrator | Wednesday 14 May 2025 14:19:17 +0000 (0:00:00.382) 0:04:11.404 ********* 2025-05-14 14:19:18.023240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:19:18.023800 | orchestrator | 2025-05-14 14:19:18.024534 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-14 14:19:18.027678 | orchestrator | Wednesday 14 May 2025 14:19:18 +0000 (0:00:00.381) 0:04:11.786 ********* 2025-05-14 14:19:18.097709 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-14 14:19:18.132693 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:19:18.132946 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-14 14:19:18.178946 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:19:18.179288 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-14 14:19:18.179940 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-14 14:19:18.228354 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:19:18.230684 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-14 14:19:18.261270 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:19:18.327633 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-14 14:19:18.327732 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:19:18.327745 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:19:18.327817 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-14 14:19:18.328542 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:19:18.329164 | orchestrator | 2025-05-14 14:19:18.329671 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-14 14:19:18.330166 | orchestrator | Wednesday 14 May 2025 14:19:18 +0000 (0:00:00.303) 0:04:12.089 ********* 2025-05-14 14:19:18.715989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:19:18.716477 | orchestrator | 2025-05-14 14:19:18.717562 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-14 14:19:18.720540 | orchestrator | Wednesday 14 May 2025 14:19:18 +0000 (0:00:00.388) 0:04:12.478 ********* 2025-05-14 14:19:51.980776 | orchestrator | changed: [testbed-manager] 2025-05-14 14:19:51.980971 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:19:51.980991 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:19:51.982198 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:19:51.983320 | orchestrator | changed: [testbed-node-5] 2025-05-14 
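
The cleanup role above disables the apt-daily timers and stops unwanted services such as ModemManager, guarded by the previously gathered service facts (hence the skips on hosts where they are absent). A sketch of that pattern; the guard condition shown is an assumption:

  # Sketch: stop/disable the apt-daily timers and unwanted services.
  - name: Disable apt-daily timers
    ansible.builtin.systemd:
      name: "{{ item }}.timer"
      state: stopped
      enabled: false
    loop:
      - apt-daily-upgrade
      - apt-daily

  - name: Cleanup services
    ansible.builtin.service:
      name: "{{ item }}"
      state: stopped
      enabled: false
    loop:
      - ModemManager.service
    when: item in ansible_facts.services
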
14:19:51.984504 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:19:51.985535 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:19:51.986459 | orchestrator | 2025-05-14 14:19:51.987090 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-14 14:19:51.988048 | orchestrator | Wednesday 14 May 2025 14:19:51 +0000 (0:00:33.257) 0:04:45.736 ********* 2025-05-14 14:19:59.962762 | orchestrator | changed: [testbed-manager] 2025-05-14 14:19:59.964230 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:19:59.965926 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:19:59.966931 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:19:59.967406 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:19:59.968032 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:19:59.968786 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:19:59.969632 | orchestrator | 2025-05-14 14:19:59.970492 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-14 14:19:59.971162 | orchestrator | Wednesday 14 May 2025 14:19:59 +0000 (0:00:07.988) 0:04:53.725 ********* 2025-05-14 14:20:06.987734 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:06.988498 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:06.989383 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:06.989689 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:06.990294 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:06.990819 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:06.991975 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:06.992220 | orchestrator | 2025-05-14 14:20:06.993704 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-14 14:20:06.994171 | orchestrator | Wednesday 14 May 2025 14:20:06 +0000 (0:00:07.020) 0:05:00.745 ********* 2025-05-14 14:20:08.522719 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:08.522946 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:20:08.524105 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:20:08.524537 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:20:08.525008 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:20:08.525617 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:20:08.526147 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:20:08.526583 | orchestrator | 2025-05-14 14:20:08.527154 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-14 14:20:08.527549 | orchestrator | Wednesday 14 May 2025 14:20:08 +0000 (0:00:01.537) 0:05:02.283 ********* 2025-05-14 14:20:13.921547 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:13.922933 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:13.922971 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:13.922992 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:13.923890 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:13.925387 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:13.925794 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:13.926317 | orchestrator | 2025-05-14 14:20:13.927480 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-14 14:20:13.928674 | orchestrator | Wednesday 14 May 2025 14:20:13 +0000 (0:00:05.398) 0:05:07.682 ********* 2025-05-14 14:20:14.391465 | orchestrator | included: 
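
The package cleanup above removes cloud-init and unattended-upgrades and then lets apt clean up its cache and orphaned dependencies. A compressed sketch of those steps:

  # Sketch of the cleanup steps shown above.
  - name: Remove cloudinit package
    ansible.builtin.apt:
      name: cloud-init
      state: absent

  - name: Uninstall unattended-upgrades package
    ansible.builtin.apt:
      name: unattended-upgrades
      state: absent

  - name: Remove useless packages from the cache
    ansible.builtin.apt:
      autoclean: true

  - name: Remove dependencies that are no longer required
    ansible.builtin.apt:
      autoremove: true
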
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:20:14.391644 | orchestrator | 2025-05-14 14:20:14.392665 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-14 14:20:14.396140 | orchestrator | Wednesday 14 May 2025 14:20:14 +0000 (0:00:00.471) 0:05:08.154 ********* 2025-05-14 14:20:15.129806 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:15.132425 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:15.132465 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:15.132539 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:15.132554 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:15.134152 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:15.134174 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:15.134186 | orchestrator | 2025-05-14 14:20:15.134199 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-14 14:20:15.134212 | orchestrator | Wednesday 14 May 2025 14:20:15 +0000 (0:00:00.734) 0:05:08.888 ********* 2025-05-14 14:20:16.721580 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:16.721743 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:20:16.724270 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:20:16.724302 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:20:16.724314 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:20:16.724324 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:20:16.724335 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:20:16.725103 | orchestrator | 2025-05-14 14:20:16.725130 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-14 14:20:16.725571 | orchestrator | Wednesday 14 May 2025 14:20:16 +0000 (0:00:01.595) 0:05:10.483 ********* 2025-05-14 14:20:17.448363 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:17.448470 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:17.448484 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:17.448559 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:17.448606 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:17.448663 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:17.448904 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:17.449225 | orchestrator | 2025-05-14 14:20:17.449429 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-14 14:20:17.449723 | orchestrator | Wednesday 14 May 2025 14:20:17 +0000 (0:00:00.727) 0:05:11.211 ********* 2025-05-14 14:20:17.523595 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:20:17.558783 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:20:17.599217 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:17.629987 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:17.661547 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:17.722112 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:17.722209 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:17.725613 | orchestrator | 2025-05-14 14:20:17.725644 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-14 14:20:17.725657 | orchestrator | Wednesday 14 May 2025 14:20:17 +0000 
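
Setting the timezone as done above needs only tzdata and the timezone module; a minimal sketch:

  # Sketch: ensure tzdata is present and switch the system to UTC.
  - name: Install tzdata package
    ansible.builtin.apt:
      name: tzdata
      state: present

  - name: Set timezone to UTC
    community.general.timezone:
      name: UTC
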
(0:00:00.274) 0:05:11.485 ********* 2025-05-14 14:20:17.814548 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:20:17.850409 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:20:17.878728 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:17.912377 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:18.103134 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:18.103256 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:18.103271 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:18.103283 | orchestrator | 2025-05-14 14:20:18.103740 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-14 14:20:18.104812 | orchestrator | Wednesday 14 May 2025 14:20:18 +0000 (0:00:00.376) 0:05:11.861 ********* 2025-05-14 14:20:18.203606 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:18.235107 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:20:18.288678 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:20:18.320039 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:20:18.379941 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:20:18.381492 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:20:18.382306 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:20:18.383910 | orchestrator | 2025-05-14 14:20:18.386518 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-14 14:20:18.387820 | orchestrator | Wednesday 14 May 2025 14:20:18 +0000 (0:00:00.281) 0:05:12.143 ********* 2025-05-14 14:20:18.448968 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:20:18.480880 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:20:18.511409 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:18.539929 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:18.572763 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:18.635738 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:18.636261 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:18.637170 | orchestrator | 2025-05-14 14:20:18.638124 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-14 14:20:18.638720 | orchestrator | Wednesday 14 May 2025 14:20:18 +0000 (0:00:00.256) 0:05:12.400 ********* 2025-05-14 14:20:18.749505 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:18.781988 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:20:18.822105 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:20:18.863365 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:20:18.924416 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:20:18.925205 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:20:18.925480 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:20:18.925934 | orchestrator | 2025-05-14 14:20:18.928882 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-14 14:20:18.929275 | orchestrator | Wednesday 14 May 2025 14:20:18 +0000 (0:00:00.288) 0:05:12.688 ********* 2025-05-14 14:20:18.997646 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:20:19.027709 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:20:19.060296 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:19.095560 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:19.194847 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:19.195706 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:19.196438 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:19.197247 | orchestrator | 2025-05-14 14:20:19.197783 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-14 14:20:19.198447 | orchestrator | Wednesday 14 May 2025 14:20:19 +0000 (0:00:00.269) 0:05:12.958 ********* 2025-05-14 14:20:19.279880 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:20:19.313368 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:20:19.344198 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:19.373848 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:19.408740 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:19.459329 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:19.459573 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:19.463643 | orchestrator | 2025-05-14 14:20:19.465287 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-14 14:20:19.466138 | orchestrator | Wednesday 14 May 2025 14:20:19 +0000 (0:00:00.264) 0:05:13.223 ********* 2025-05-14 14:20:20.006340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:20:20.007553 | orchestrator | 2025-05-14 14:20:20.008323 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-14 14:20:20.009299 | orchestrator | Wednesday 14 May 2025 14:20:20 +0000 (0:00:00.545) 0:05:13.769 ********* 2025-05-14 14:20:20.862467 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:20.862621 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:20:20.863823 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:20:20.865170 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:20:20.866239 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:20:20.867869 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:20:20.868732 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:20:20.869248 | orchestrator | 2025-05-14 14:20:20.869677 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-14 14:20:20.870225 | orchestrator | Wednesday 14 May 2025 14:20:20 +0000 (0:00:00.856) 0:05:14.625 ********* 2025-05-14 14:20:23.555502 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:23.556247 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:20:23.557017 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:20:23.558006 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:20:23.562430 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:20:23.562458 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:20:23.562464 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:20:23.562469 | orchestrator | 2025-05-14 14:20:23.562478 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-14 14:20:23.562981 | orchestrator | Wednesday 14 May 2025 14:20:23 +0000 (0:00:02.693) 0:05:17.318 ********* 2025-05-14 14:20:23.628641 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-14 14:20:23.630362 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-14 14:20:23.700003 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-14 14:20:23.700841 | orchestrator | skipping: [testbed-node-3] => 
(item=containerd)  2025-05-14 14:20:23.701774 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-14 14:20:23.776559 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:20:23.776801 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-14 14:20:23.778137 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-14 14:20:23.779235 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-14 14:20:23.852600 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-14 14:20:23.853862 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:20:23.853885 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-14 14:20:23.853891 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-14 14:20:23.934897 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-14 14:20:23.935216 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:23.935863 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-14 14:20:23.936186 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-14 14:20:23.937534 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-14 14:20:24.003433 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:24.005039 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-14 14:20:24.008208 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-14 14:20:24.009219 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-14 14:20:24.140621 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:24.141382 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:24.142258 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-14 14:20:24.142896 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-14 14:20:24.145330 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-14 14:20:24.148755 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:24.149140 | orchestrator | 2025-05-14 14:20:24.150242 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-14 14:20:24.150856 | orchestrator | Wednesday 14 May 2025 14:20:24 +0000 (0:00:00.586) 0:05:17.905 ********* 2025-05-14 14:20:29.856295 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:29.856841 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:29.858327 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:29.860611 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:29.860707 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:29.861278 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:29.862717 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:29.863002 | orchestrator | 2025-05-14 14:20:29.863909 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-14 14:20:29.864675 | orchestrator | Wednesday 14 May 2025 14:20:29 +0000 (0:00:05.712) 0:05:23.617 ********* 2025-05-14 14:20:30.880509 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:30.880807 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:30.882089 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:30.882759 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:30.884880 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:30.884909 | 
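
The "packages that should not be installed" check above loops over containerd, docker.io and docker-engine and skips when none of them are present. A sketch of that guard using package facts; failing the play is an assumption about how the real role reacts:

  # Sketch: abort if a conflicting container runtime package is already installed.
  - name: Gather package facts
    ansible.builtin.package_facts:

  - name: Check whether packages are installed that should not be installed
    ansible.builtin.fail:
      msg: "Conflicting package {{ item }} is installed"
    loop:
      - containerd
      - docker.io
      - docker-engine
    when: item in ansible_facts.packages
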
orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:30.885248 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:30.885283 | orchestrator | 2025-05-14 14:20:30.886164 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-14 14:20:30.886190 | orchestrator | Wednesday 14 May 2025 14:20:30 +0000 (0:00:01.025) 0:05:24.642 ********* 2025-05-14 14:20:37.959280 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:37.959992 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:37.960398 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:37.962129 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:37.962160 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:37.963179 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:37.963740 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:37.964292 | orchestrator | 2025-05-14 14:20:37.964938 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-14 14:20:37.965466 | orchestrator | Wednesday 14 May 2025 14:20:37 +0000 (0:00:07.079) 0:05:31.722 ********* 2025-05-14 14:20:41.100155 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:41.100408 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:41.101715 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:41.101940 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:41.102585 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:41.103153 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:41.103796 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:41.104376 | orchestrator | 2025-05-14 14:20:41.105206 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-14 14:20:41.105909 | orchestrator | Wednesday 14 May 2025 14:20:41 +0000 (0:00:03.135) 0:05:34.858 ********* 2025-05-14 14:20:42.388185 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:42.388298 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:42.388384 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:42.390251 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:42.390738 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:42.391337 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:42.391802 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:42.392174 | orchestrator | 2025-05-14 14:20:42.392834 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-14 14:20:42.393346 | orchestrator | Wednesday 14 May 2025 14:20:42 +0000 (0:00:01.287) 0:05:36.145 ********* 2025-05-14 14:20:43.889442 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:43.889610 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:43.890562 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:43.892558 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:43.893005 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:43.895200 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:43.896256 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:43.897138 | orchestrator | 2025-05-14 14:20:43.898156 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-14 14:20:43.898569 | orchestrator | Wednesday 14 May 2025 14:20:43 +0000 (0:00:01.506) 0:05:37.651 ********* 2025-05-14 14:20:44.093829 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:20:44.159389 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:20:44.225230 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:20:44.295115 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:20:44.460141 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:20:44.460253 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:20:44.460396 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:44.462740 | orchestrator | 2025-05-14 14:20:44.463859 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-14 14:20:44.464406 | orchestrator | Wednesday 14 May 2025 14:20:44 +0000 (0:00:00.569) 0:05:38.221 ********* 2025-05-14 14:20:53.861756 | orchestrator | ok: [testbed-manager] 2025-05-14 14:20:53.861909 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:53.865574 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:53.865592 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:53.865600 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:53.866127 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:53.866881 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:53.868274 | orchestrator | 2025-05-14 14:20:53.869274 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-14 14:20:53.869694 | orchestrator | Wednesday 14 May 2025 14:20:53 +0000 (0:00:09.401) 0:05:47.622 ********* 2025-05-14 14:20:54.791850 | orchestrator | changed: [testbed-manager] 2025-05-14 14:20:54.792762 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:20:54.793127 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:20:54.793930 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:20:54.795270 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:20:54.796968 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:20:54.797421 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:20:54.798566 | orchestrator | 2025-05-14 14:20:54.798875 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-14 14:20:54.799427 | orchestrator | Wednesday 14 May 2025 14:20:54 +0000 (0:00:00.932) 0:05:48.555 ********* 2025-05-14 14:21:07.190290 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:07.190398 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:07.190409 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:07.190415 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:07.190577 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:07.190743 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:07.191296 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:07.191795 | orchestrator | 2025-05-14 14:21:07.192652 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-14 14:21:07.192884 | orchestrator | Wednesday 14 May 2025 14:21:07 +0000 (0:00:12.391) 0:06:00.946 ********* 2025-05-14 14:21:20.053824 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:20.053987 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:20.054194 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:20.054216 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:20.055151 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:20.056758 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:20.057498 | orchestrator | changed: [testbed-node-4] 2025-05-14 
14:21:20.058830 | orchestrator | 2025-05-14 14:21:20.059705 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-14 14:21:20.060726 | orchestrator | Wednesday 14 May 2025 14:21:20 +0000 (0:00:12.865) 0:06:13.812 ********* 2025-05-14 14:21:20.460340 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-14 14:21:21.224541 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-14 14:21:21.227220 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-14 14:21:21.227789 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-14 14:21:21.228076 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-14 14:21:21.228507 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-14 14:21:21.228828 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-14 14:21:21.229291 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-14 14:21:21.229814 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-14 14:21:21.230146 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-14 14:21:21.230372 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-14 14:21:21.230959 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-14 14:21:21.231530 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-14 14:21:21.231737 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-14 14:21:21.231940 | orchestrator | 2025-05-14 14:21:21.232405 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-14 14:21:21.232790 | orchestrator | Wednesday 14 May 2025 14:21:21 +0000 (0:00:01.174) 0:06:14.986 ********* 2025-05-14 14:21:21.359555 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:21.436940 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:21:21.500095 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:21.562981 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:21.627830 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:21.750761 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:21.751590 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:21.752211 | orchestrator | 2025-05-14 14:21:21.752947 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-14 14:21:21.758648 | orchestrator | Wednesday 14 May 2025 14:21:21 +0000 (0:00:00.526) 0:06:15.513 ********* 2025-05-14 14:21:25.152628 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:25.152818 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:25.153715 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:25.156714 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:25.156755 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:25.156764 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:25.157500 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:25.158404 | orchestrator | 2025-05-14 14:21:25.159083 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-14 14:21:25.159578 | orchestrator | Wednesday 14 May 2025 14:21:25 +0000 (0:00:03.400) 0:06:18.913 ********* 2025-05-14 14:21:25.276705 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:25.337095 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
14:21:25.400538 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:25.624478 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:25.689344 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:25.789940 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:25.790757 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:25.792015 | orchestrator | 2025-05-14 14:21:25.794823 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-14 14:21:25.794874 | orchestrator | Wednesday 14 May 2025 14:21:25 +0000 (0:00:00.638) 0:06:19.552 ********* 2025-05-14 14:21:25.875057 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-14 14:21:25.875156 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-14 14:21:25.941096 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:25.942744 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-14 14:21:25.943171 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-14 14:21:26.008249 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:21:26.008997 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-14 14:21:26.009210 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-14 14:21:26.083909 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:26.084094 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-14 14:21:26.085120 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-14 14:21:26.145973 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:26.146148 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-14 14:21:26.146164 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-14 14:21:26.214587 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:26.214751 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-14 14:21:26.215399 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-14 14:21:26.322558 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:26.322963 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-14 14:21:26.324152 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-14 14:21:26.325664 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:26.325880 | orchestrator | 2025-05-14 14:21:26.327185 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-14 14:21:26.327611 | orchestrator | Wednesday 14 May 2025 14:21:26 +0000 (0:00:00.534) 0:06:20.086 ********* 2025-05-14 14:21:26.446876 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:26.517500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:21:26.579060 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:26.640235 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:26.720759 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:26.817638 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:26.819787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:26.819820 | orchestrator | 2025-05-14 14:21:26.819834 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-14 14:21:26.819946 | orchestrator | Wednesday 14 
May 2025 14:21:26 +0000 (0:00:00.490) 0:06:20.577 ********* 2025-05-14 14:21:26.949097 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:27.022266 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:21:27.086518 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:27.159128 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:27.220409 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:27.314398 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:27.315671 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:27.316847 | orchestrator | 2025-05-14 14:21:27.317476 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-14 14:21:27.318246 | orchestrator | Wednesday 14 May 2025 14:21:27 +0000 (0:00:00.498) 0:06:21.075 ********* 2025-05-14 14:21:27.459248 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:27.522367 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:21:27.587865 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:27.647397 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:27.709610 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:27.845530 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:27.846463 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:27.847484 | orchestrator | 2025-05-14 14:21:27.849813 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-14 14:21:27.850898 | orchestrator | Wednesday 14 May 2025 14:21:27 +0000 (0:00:00.533) 0:06:21.608 ********* 2025-05-14 14:21:33.983290 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:33.983405 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:33.984124 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:33.985790 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:33.986718 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:33.987264 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:33.989087 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:33.989791 | orchestrator | 2025-05-14 14:21:33.990716 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-14 14:21:33.992698 | orchestrator | Wednesday 14 May 2025 14:21:33 +0000 (0:00:06.135) 0:06:27.744 ********* 2025-05-14 14:21:34.819201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:21:34.819859 | orchestrator | 2025-05-14 14:21:34.820968 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-14 14:21:34.822267 | orchestrator | Wednesday 14 May 2025 14:21:34 +0000 (0:00:00.834) 0:06:28.579 ********* 2025-05-14 14:21:35.214796 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:35.642257 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:35.643261 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:35.643725 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:35.643991 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:35.644743 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:35.646113 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:35.646138 | orchestrator | 2025-05-14 14:21:35.646386 | orchestrator | TASK [osism.services.docker 
: Create systemd overlay directory] **************** 2025-05-14 14:21:35.646903 | orchestrator | Wednesday 14 May 2025 14:21:35 +0000 (0:00:00.824) 0:06:29.403 ********* 2025-05-14 14:21:36.036563 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:36.479831 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:36.480172 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:36.481929 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:36.485769 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:36.486631 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:36.489462 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:36.490262 | orchestrator | 2025-05-14 14:21:36.491122 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-14 14:21:36.491799 | orchestrator | Wednesday 14 May 2025 14:21:36 +0000 (0:00:00.839) 0:06:30.243 ********* 2025-05-14 14:21:37.987212 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:37.987324 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:37.989642 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:37.990174 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:37.991013 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:37.991666 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:37.992378 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:37.992943 | orchestrator | 2025-05-14 14:21:37.993834 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-14 14:21:37.994471 | orchestrator | Wednesday 14 May 2025 14:21:37 +0000 (0:00:01.506) 0:06:31.749 ********* 2025-05-14 14:21:38.128260 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:39.394824 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:21:39.396274 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:21:39.397227 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:21:39.399056 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:21:39.399725 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:39.400699 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:39.401520 | orchestrator | 2025-05-14 14:21:39.402250 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-14 14:21:39.405607 | orchestrator | Wednesday 14 May 2025 14:21:39 +0000 (0:00:01.405) 0:06:33.154 ********* 2025-05-14 14:21:40.749225 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:40.750922 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:40.751285 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:40.752407 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:40.753474 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:40.754170 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:40.754960 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:40.755664 | orchestrator | 2025-05-14 14:21:40.756472 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-14 14:21:40.756924 | orchestrator | Wednesday 14 May 2025 14:21:40 +0000 (0:00:01.357) 0:06:34.512 ********* 2025-05-14 14:21:42.100280 | orchestrator | changed: [testbed-manager] 2025-05-14 14:21:42.100435 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:42.102783 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:42.102808 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:42.103071 | 
orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:42.104605 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:42.104803 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:42.105152 | orchestrator | 2025-05-14 14:21:42.105992 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-14 14:21:42.106677 | orchestrator | Wednesday 14 May 2025 14:21:42 +0000 (0:00:01.349) 0:06:35.861 ********* 2025-05-14 14:21:43.121159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:21:43.122937 | orchestrator | 2025-05-14 14:21:43.123542 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-14 14:21:43.124714 | orchestrator | Wednesday 14 May 2025 14:21:43 +0000 (0:00:01.021) 0:06:36.882 ********* 2025-05-14 14:21:44.485206 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:44.485443 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:21:44.485969 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:21:44.486612 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:21:44.487358 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:44.487908 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:21:44.488249 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:44.488803 | orchestrator | 2025-05-14 14:21:44.489317 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-14 14:21:44.489622 | orchestrator | Wednesday 14 May 2025 14:21:44 +0000 (0:00:01.360) 0:06:38.243 ********* 2025-05-14 14:21:45.650388 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:45.650513 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:21:45.650584 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:21:45.651165 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:21:45.655125 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:21:45.655608 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:45.655902 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:45.656467 | orchestrator | 2025-05-14 14:21:45.656947 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-14 14:21:45.657458 | orchestrator | Wednesday 14 May 2025 14:21:45 +0000 (0:00:01.167) 0:06:39.411 ********* 2025-05-14 14:21:46.747985 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:46.748566 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:21:46.750209 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:21:46.751250 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:21:46.752264 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:21:46.752844 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:46.753459 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:46.754518 | orchestrator | 2025-05-14 14:21:46.755030 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-14 14:21:46.755584 | orchestrator | Wednesday 14 May 2025 14:21:46 +0000 (0:00:01.099) 0:06:40.511 ********* 2025-05-14 14:21:48.044778 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:48.045105 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:21:48.046612 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:21:48.047225 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:21:48.048555 | orchestrator | ok: 
[testbed-node-0] 2025-05-14 14:21:48.049459 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:48.051864 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:48.052965 | orchestrator | 2025-05-14 14:21:48.054093 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-14 14:21:48.054671 | orchestrator | Wednesday 14 May 2025 14:21:48 +0000 (0:00:01.295) 0:06:41.806 ********* 2025-05-14 14:21:49.198772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:21:49.198882 | orchestrator | 2025-05-14 14:21:49.199605 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.199900 | orchestrator | Wednesday 14 May 2025 14:21:48 +0000 (0:00:00.865) 0:06:42.671 ********* 2025-05-14 14:21:49.203235 | orchestrator | 2025-05-14 14:21:49.204199 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.204227 | orchestrator | Wednesday 14 May 2025 14:21:48 +0000 (0:00:00.042) 0:06:42.714 ********* 2025-05-14 14:21:49.204766 | orchestrator | 2025-05-14 14:21:49.205331 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.205837 | orchestrator | Wednesday 14 May 2025 14:21:48 +0000 (0:00:00.036) 0:06:42.751 ********* 2025-05-14 14:21:49.206147 | orchestrator | 2025-05-14 14:21:49.206688 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.207142 | orchestrator | Wednesday 14 May 2025 14:21:49 +0000 (0:00:00.036) 0:06:42.787 ********* 2025-05-14 14:21:49.207558 | orchestrator | 2025-05-14 14:21:49.208100 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.208585 | orchestrator | Wednesday 14 May 2025 14:21:49 +0000 (0:00:00.046) 0:06:42.834 ********* 2025-05-14 14:21:49.208784 | orchestrator | 2025-05-14 14:21:49.209288 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.209720 | orchestrator | Wednesday 14 May 2025 14:21:49 +0000 (0:00:00.036) 0:06:42.871 ********* 2025-05-14 14:21:49.210120 | orchestrator | 2025-05-14 14:21:49.210654 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 14:21:49.211016 | orchestrator | Wednesday 14 May 2025 14:21:49 +0000 (0:00:00.036) 0:06:42.908 ********* 2025-05-14 14:21:49.211456 | orchestrator | 2025-05-14 14:21:49.211790 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 14:21:49.212204 | orchestrator | Wednesday 14 May 2025 14:21:49 +0000 (0:00:00.053) 0:06:42.961 ********* 2025-05-14 14:21:50.301589 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:21:50.301940 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:50.302606 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:50.303659 | orchestrator | 2025-05-14 14:21:50.303871 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-14 14:21:50.304490 | orchestrator | Wednesday 14 May 2025 14:21:50 +0000 (0:00:01.103) 0:06:44.064 ********* 2025-05-14 14:21:51.788376 | orchestrator | changed: [testbed-manager] 2025-05-14 
14:21:51.788569 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:51.790975 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:51.791401 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:51.791513 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:51.792613 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:51.793070 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:51.794231 | orchestrator | 2025-05-14 14:21:51.794645 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-14 14:21:51.794775 | orchestrator | Wednesday 14 May 2025 14:21:51 +0000 (0:00:01.484) 0:06:45.548 ********* 2025-05-14 14:21:52.915574 | orchestrator | changed: [testbed-manager] 2025-05-14 14:21:52.915733 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:52.916218 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:52.916662 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:52.919868 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:52.919895 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:52.919906 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:52.920556 | orchestrator | 2025-05-14 14:21:52.921758 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-14 14:21:52.922695 | orchestrator | Wednesday 14 May 2025 14:21:52 +0000 (0:00:01.127) 0:06:46.675 ********* 2025-05-14 14:21:53.037243 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:55.057359 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:55.057468 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:55.057482 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:55.057511 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:55.057523 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:55.058369 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:55.059612 | orchestrator | 2025-05-14 14:21:55.060082 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-14 14:21:55.060326 | orchestrator | Wednesday 14 May 2025 14:21:55 +0000 (0:00:02.135) 0:06:48.811 ********* 2025-05-14 14:21:55.151496 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:21:55.153129 | orchestrator | 2025-05-14 14:21:55.153389 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-14 14:21:55.153417 | orchestrator | Wednesday 14 May 2025 14:21:55 +0000 (0:00:00.099) 0:06:48.911 ********* 2025-05-14 14:21:56.169998 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:56.170260 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:21:56.170868 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:21:56.172352 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:21:56.173212 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:21:56.173939 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:21:56.175289 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:21:56.175775 | orchestrator | 2025-05-14 14:21:56.176393 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-14 14:21:56.177200 | orchestrator | Wednesday 14 May 2025 14:21:56 +0000 (0:00:01.021) 0:06:49.932 ********* 2025-05-14 14:21:56.304325 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:21:56.370982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
14:21:56.430119 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:21:56.490255 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:21:56.700030 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:21:56.823607 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:21:56.823726 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:21:56.823822 | orchestrator | 2025-05-14 14:21:56.825460 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-14 14:21:56.825487 | orchestrator | Wednesday 14 May 2025 14:21:56 +0000 (0:00:00.653) 0:06:50.586 ********* 2025-05-14 14:21:57.674266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:21:57.674509 | orchestrator | 2025-05-14 14:21:57.676211 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-14 14:21:57.677363 | orchestrator | Wednesday 14 May 2025 14:21:57 +0000 (0:00:00.848) 0:06:51.435 ********* 2025-05-14 14:21:58.496103 | orchestrator | ok: [testbed-manager] 2025-05-14 14:21:58.497202 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:21:58.497797 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:21:58.498804 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:21:58.499614 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:21:58.500102 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:21:58.500801 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:21:58.501228 | orchestrator | 2025-05-14 14:21:58.501876 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-14 14:21:58.502354 | orchestrator | Wednesday 14 May 2025 14:21:58 +0000 (0:00:00.821) 0:06:52.256 ********* 2025-05-14 14:22:01.139006 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-14 14:22:01.139204 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-14 14:22:01.139223 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-14 14:22:01.139236 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-14 14:22:01.139247 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-14 14:22:01.140641 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-14 14:22:01.140871 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-14 14:22:01.142137 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-14 14:22:01.142738 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-14 14:22:01.143413 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-14 14:22:01.143934 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-14 14:22:01.144569 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-14 14:22:01.145166 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-14 14:22:01.145843 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-14 14:22:01.146687 | orchestrator | 2025-05-14 14:22:01.147031 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-14 14:22:01.147548 | orchestrator | Wednesday 14 May 2025 14:22:01 +0000 (0:00:02.628) 0:06:54.885 ********* 
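Note on the "Copy docker fact files" step recorded above: it distributes custom local facts (docker_containers, docker_images) to each host so that later plays can read the Docker state through ansible_local. The sketch below shows one common way such a task is wired up, assuming the facts are installed as executable scripts under /etc/ansible/facts.d; the template names, path, and file mode are illustrative assumptions, not the actual contents of the osism.services.docker role.

# Sketch only: the real role's template names, destination path, and permissions may differ.
- name: Copy docker fact files
  ansible.builtin.template:
    src: "{{ item }}.fact.j2"                    # hypothetical template name
    dest: "/etc/ansible/facts.d/{{ item }}.fact" # standard local-facts directory
    owner: root
    group: root
    mode: "0755"                                 # executable so the script's JSON output is collected
  loop:
    - docker_containers
    - docker_images

On the next fact-gathering pass, the output of these scripts is exposed per host as ansible_local.docker_containers and ansible_local.docker_images.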
2025-05-14 14:22:01.264165 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:01.323187 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:01.407272 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:01.464837 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:01.528371 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:01.626191 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:01.627304 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:01.630900 | orchestrator | 2025-05-14 14:22:01.630937 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-14 14:22:01.630951 | orchestrator | Wednesday 14 May 2025 14:22:01 +0000 (0:00:00.502) 0:06:55.388 ********* 2025-05-14 14:22:02.398346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:22:02.398499 | orchestrator | 2025-05-14 14:22:02.399704 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-14 14:22:02.400107 | orchestrator | Wednesday 14 May 2025 14:22:02 +0000 (0:00:00.769) 0:06:56.158 ********* 2025-05-14 14:22:03.284134 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:03.284244 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:03.284511 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:03.285421 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:03.287083 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:03.287560 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:03.287581 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:03.289297 | orchestrator | 2025-05-14 14:22:03.290480 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-14 14:22:03.291130 | orchestrator | Wednesday 14 May 2025 14:22:03 +0000 (0:00:00.885) 0:06:57.043 ********* 2025-05-14 14:22:03.697169 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:03.761664 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:04.322462 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:04.323225 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:04.324118 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:04.325225 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:04.325692 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:04.326429 | orchestrator | 2025-05-14 14:22:04.327680 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-14 14:22:04.328016 | orchestrator | Wednesday 14 May 2025 14:22:04 +0000 (0:00:01.040) 0:06:58.084 ********* 2025-05-14 14:22:04.452615 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:04.516908 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:04.585697 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:04.647816 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:04.710427 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:04.807641 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:04.808188 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:04.809106 | orchestrator | 2025-05-14 14:22:04.810151 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-14 14:22:04.810540 | 
orchestrator | Wednesday 14 May 2025 14:22:04 +0000 (0:00:00.487) 0:06:58.571 ********* 2025-05-14 14:22:06.183727 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:06.184442 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:06.184977 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:06.186257 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:06.187018 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:06.187534 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:06.188533 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:06.189650 | orchestrator | 2025-05-14 14:22:06.190257 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-14 14:22:06.190697 | orchestrator | Wednesday 14 May 2025 14:22:06 +0000 (0:00:01.373) 0:06:59.945 ********* 2025-05-14 14:22:06.315214 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:06.378015 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:06.439262 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:06.506910 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:06.568623 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:06.673769 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:06.673966 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:06.674510 | orchestrator | 2025-05-14 14:22:06.674962 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-14 14:22:06.675662 | orchestrator | Wednesday 14 May 2025 14:22:06 +0000 (0:00:00.490) 0:07:00.435 ********* 2025-05-14 14:22:08.703553 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:08.704541 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:08.706833 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:08.708880 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:08.709505 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:08.710582 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:08.711422 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:08.711944 | orchestrator | 2025-05-14 14:22:08.712621 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-14 14:22:08.714703 | orchestrator | Wednesday 14 May 2025 14:22:08 +0000 (0:00:02.029) 0:07:02.465 ********* 2025-05-14 14:22:10.108466 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:10.108572 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:10.108645 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:10.108894 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:10.109205 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:10.109625 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:10.109825 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:10.110513 | orchestrator | 2025-05-14 14:22:10.110725 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-14 14:22:10.112018 | orchestrator | Wednesday 14 May 2025 14:22:10 +0000 (0:00:01.404) 0:07:03.869 ********* 2025-05-14 14:22:11.915580 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:11.915698 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:11.917839 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:11.918606 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:11.919965 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:11.921876 | orchestrator | changed: [testbed-node-5] 2025-05-14 
14:22:11.923143 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:11.924243 | orchestrator | 2025-05-14 14:22:11.925225 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-14 14:22:11.925639 | orchestrator | Wednesday 14 May 2025 14:22:11 +0000 (0:00:01.807) 0:07:05.676 ********* 2025-05-14 14:22:13.872723 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:13.874608 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:13.875241 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:13.875294 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:13.875853 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:13.876867 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:13.877566 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:13.878173 | orchestrator | 2025-05-14 14:22:13.878525 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 14:22:13.879151 | orchestrator | Wednesday 14 May 2025 14:22:13 +0000 (0:00:01.952) 0:07:07.629 ********* 2025-05-14 14:22:14.720658 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:14.836491 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:15.267480 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:15.268210 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:15.270268 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:15.272093 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:15.276083 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:15.277178 | orchestrator | 2025-05-14 14:22:15.278530 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 14:22:15.279102 | orchestrator | Wednesday 14 May 2025 14:22:15 +0000 (0:00:01.399) 0:07:09.028 ********* 2025-05-14 14:22:15.433799 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:15.512098 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:15.597102 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:15.657971 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:15.724088 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:16.147870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:16.148110 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:16.148599 | orchestrator | 2025-05-14 14:22:16.151572 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-14 14:22:16.152538 | orchestrator | Wednesday 14 May 2025 14:22:16 +0000 (0:00:00.881) 0:07:09.910 ********* 2025-05-14 14:22:16.298471 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:16.365783 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:16.430204 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:16.504094 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:16.563169 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:16.661151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:16.662551 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:16.663180 | orchestrator | 2025-05-14 14:22:16.664093 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-14 14:22:16.665523 | orchestrator | Wednesday 14 May 2025 14:22:16 +0000 (0:00:00.512) 0:07:10.423 ********* 2025-05-14 14:22:16.795719 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:16.860001 | orchestrator | ok: 
[testbed-node-3] 2025-05-14 14:22:16.926769 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:16.989686 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:17.054603 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:17.162828 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:17.165155 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:17.166915 | orchestrator | 2025-05-14 14:22:17.168287 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-14 14:22:17.168682 | orchestrator | Wednesday 14 May 2025 14:22:17 +0000 (0:00:00.503) 0:07:10.926 ********* 2025-05-14 14:22:17.289773 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:17.358321 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:17.600106 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:17.664282 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:17.755225 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:17.879524 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:17.879623 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:17.880134 | orchestrator | 2025-05-14 14:22:17.880458 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-14 14:22:17.880925 | orchestrator | Wednesday 14 May 2025 14:22:17 +0000 (0:00:00.714) 0:07:11.641 ********* 2025-05-14 14:22:18.040877 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:18.123434 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:18.201942 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:18.267547 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:18.345357 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:18.454140 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:18.454675 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:18.455367 | orchestrator | 2025-05-14 14:22:18.455922 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-14 14:22:18.456958 | orchestrator | Wednesday 14 May 2025 14:22:18 +0000 (0:00:00.575) 0:07:12.216 ********* 2025-05-14 14:22:24.269881 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:24.271377 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:24.271496 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:24.272978 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:24.273712 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:24.274752 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:24.276416 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:24.277555 | orchestrator | 2025-05-14 14:22:24.278113 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-14 14:22:24.278927 | orchestrator | Wednesday 14 May 2025 14:22:24 +0000 (0:00:05.814) 0:07:18.031 ********* 2025-05-14 14:22:24.483338 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:24.555465 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:24.617414 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:24.679155 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:24.807342 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:24.807434 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:24.808658 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:24.811703 | orchestrator | 2025-05-14 14:22:24.811727 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-14 14:22:24.811738 | 
orchestrator | Wednesday 14 May 2025 14:22:24 +0000 (0:00:00.538) 0:07:18.570 ********* 2025-05-14 14:22:25.752406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:22:25.752842 | orchestrator | 2025-05-14 14:22:25.757126 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-14 14:22:25.759559 | orchestrator | Wednesday 14 May 2025 14:22:25 +0000 (0:00:00.942) 0:07:19.513 ********* 2025-05-14 14:22:27.643945 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:27.644615 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:27.646163 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:27.647304 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:27.647934 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:27.648480 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:27.649491 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:27.649824 | orchestrator | 2025-05-14 14:22:27.650327 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-14 14:22:27.650972 | orchestrator | Wednesday 14 May 2025 14:22:27 +0000 (0:00:01.893) 0:07:21.406 ********* 2025-05-14 14:22:28.826508 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:28.827223 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:28.828533 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:28.830252 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:28.830617 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:28.831622 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:28.832662 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:28.833667 | orchestrator | 2025-05-14 14:22:28.834559 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-14 14:22:28.835476 | orchestrator | Wednesday 14 May 2025 14:22:28 +0000 (0:00:01.180) 0:07:22.587 ********* 2025-05-14 14:22:29.680257 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:29.680510 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:29.681280 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:29.683036 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:29.683097 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:29.684269 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:29.685729 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:29.685761 | orchestrator | 2025-05-14 14:22:29.686108 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-14 14:22:29.687211 | orchestrator | Wednesday 14 May 2025 14:22:29 +0000 (0:00:00.856) 0:07:23.443 ********* 2025-05-14 14:22:31.690541 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.690706 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.691473 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.694779 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.694807 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.694818 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.694830 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 14:22:31.696011 | orchestrator | 2025-05-14 14:22:31.698926 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-14 14:22:31.700076 | orchestrator | Wednesday 14 May 2025 14:22:31 +0000 (0:00:02.007) 0:07:25.451 ********* 2025-05-14 14:22:32.484798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:22:32.485022 | orchestrator | 2025-05-14 14:22:32.485134 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-14 14:22:32.485784 | orchestrator | Wednesday 14 May 2025 14:22:32 +0000 (0:00:00.796) 0:07:26.248 ********* 2025-05-14 14:22:41.596527 | orchestrator | changed: [testbed-manager] 2025-05-14 14:22:41.596808 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:41.599622 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:41.599647 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:41.599659 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:41.600413 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:41.600435 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:41.600769 | orchestrator | 2025-05-14 14:22:41.601193 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-14 14:22:41.601719 | orchestrator | Wednesday 14 May 2025 14:22:41 +0000 (0:00:09.108) 0:07:35.356 ********* 2025-05-14 14:22:43.511646 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:43.511821 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:43.513106 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:43.513383 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:43.514334 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:43.514612 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:43.516443 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:43.516863 | orchestrator | 2025-05-14 14:22:43.517378 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-14 14:22:43.517703 | orchestrator | Wednesday 14 May 2025 14:22:43 +0000 (0:00:01.917) 0:07:37.274 ********* 2025-05-14 14:22:44.781126 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:44.781255 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:44.781711 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:44.782161 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:44.783289 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:44.784987 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:44.786149 | orchestrator | 2025-05-14 14:22:44.787375 | orchestrator | RUNNING HANDLER [osism.services.chrony : 
Restart chrony service] *************** 2025-05-14 14:22:44.787477 | orchestrator | Wednesday 14 May 2025 14:22:44 +0000 (0:00:01.269) 0:07:38.543 ********* 2025-05-14 14:22:46.190371 | orchestrator | changed: [testbed-manager] 2025-05-14 14:22:46.190749 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:46.191353 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:46.192328 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:46.193521 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:46.194476 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:46.195117 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:46.195796 | orchestrator | 2025-05-14 14:22:46.196440 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-14 14:22:46.196841 | orchestrator | 2025-05-14 14:22:46.197647 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-14 14:22:46.197977 | orchestrator | Wednesday 14 May 2025 14:22:46 +0000 (0:00:01.408) 0:07:39.952 ********* 2025-05-14 14:22:46.310594 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:46.378676 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:46.438382 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:46.507815 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:46.584724 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:46.702562 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:46.703190 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:46.703404 | orchestrator | 2025-05-14 14:22:46.704143 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-14 14:22:46.705321 | orchestrator | 2025-05-14 14:22:46.705681 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-14 14:22:46.706141 | orchestrator | Wednesday 14 May 2025 14:22:46 +0000 (0:00:00.511) 0:07:40.463 ********* 2025-05-14 14:22:48.015846 | orchestrator | changed: [testbed-manager] 2025-05-14 14:22:48.016141 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:48.016483 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:48.017123 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:48.017723 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:48.020978 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:48.021424 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:48.021771 | orchestrator | 2025-05-14 14:22:48.022114 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-14 14:22:48.022422 | orchestrator | Wednesday 14 May 2025 14:22:48 +0000 (0:00:01.313) 0:07:41.777 ********* 2025-05-14 14:22:49.411979 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:49.412587 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:49.414135 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:49.414923 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:49.416116 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:49.417177 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:49.418535 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:49.418874 | orchestrator | 2025-05-14 14:22:49.419511 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-14 14:22:49.420121 | orchestrator | Wednesday 14 May 2025 14:22:49 +0000 (0:00:01.395) 
0:07:43.172 ********* 2025-05-14 14:22:49.542605 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:22:49.603958 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:22:49.669117 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:22:49.885928 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:22:49.956900 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:22:50.355161 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:22:50.355461 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:22:50.356519 | orchestrator | 2025-05-14 14:22:50.357651 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-14 14:22:50.358437 | orchestrator | Wednesday 14 May 2025 14:22:50 +0000 (0:00:00.945) 0:07:44.117 ********* 2025-05-14 14:22:51.571231 | orchestrator | changed: [testbed-manager] 2025-05-14 14:22:51.572669 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:51.576462 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:51.576532 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:51.576554 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:51.576572 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:51.576588 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:51.577474 | orchestrator | 2025-05-14 14:22:51.578856 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-14 14:22:51.578980 | orchestrator | 2025-05-14 14:22:51.580253 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-14 14:22:51.580935 | orchestrator | Wednesday 14 May 2025 14:22:51 +0000 (0:00:01.217) 0:07:45.334 ********* 2025-05-14 14:22:52.357723 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:22:52.358653 | orchestrator | 2025-05-14 14:22:52.359109 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-14 14:22:52.364180 | orchestrator | Wednesday 14 May 2025 14:22:52 +0000 (0:00:00.785) 0:07:46.120 ********* 2025-05-14 14:22:52.819240 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:52.907545 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:53.383142 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:53.383584 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:53.385029 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:53.385269 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:53.386415 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:53.386942 | orchestrator | 2025-05-14 14:22:53.387196 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-14 14:22:53.388035 | orchestrator | Wednesday 14 May 2025 14:22:53 +0000 (0:00:01.025) 0:07:47.145 ********* 2025-05-14 14:22:54.498576 | orchestrator | changed: [testbed-manager] 2025-05-14 14:22:54.498740 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:54.499230 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:54.502311 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:54.502355 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:54.502368 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:54.503109 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:54.504290 | orchestrator | 2025-05-14 14:22:54.504779 | orchestrator | TASK 
[Set osism.bootstrap.timestamp fact] ************************************** 2025-05-14 14:22:54.505233 | orchestrator | Wednesday 14 May 2025 14:22:54 +0000 (0:00:01.115) 0:07:48.261 ********* 2025-05-14 14:22:55.463587 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:22:55.464440 | orchestrator | 2025-05-14 14:22:55.464852 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-14 14:22:55.465827 | orchestrator | Wednesday 14 May 2025 14:22:55 +0000 (0:00:00.963) 0:07:49.225 ********* 2025-05-14 14:22:55.877900 | orchestrator | ok: [testbed-manager] 2025-05-14 14:22:56.284631 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:22:56.285963 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:22:56.287234 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:22:56.287258 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:22:56.287270 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:22:56.289233 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:22:56.289682 | orchestrator | 2025-05-14 14:22:56.290218 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-14 14:22:56.290639 | orchestrator | Wednesday 14 May 2025 14:22:56 +0000 (0:00:00.820) 0:07:50.045 ********* 2025-05-14 14:22:56.702652 | orchestrator | changed: [testbed-manager] 2025-05-14 14:22:57.367925 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:22:57.368092 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:22:57.368173 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:22:57.369156 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:22:57.370478 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:22:57.370795 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:22:57.372174 | orchestrator | 2025-05-14 14:22:57.372613 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:22:57.372996 | orchestrator | 2025-05-14 14:22:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:22:57.373324 | orchestrator | 2025-05-14 14:22:57 | INFO  | Please wait and do not abort execution. 
2025-05-14 14:22:57.374351 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-14 14:22:57.374794 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 14:22:57.375659 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 14:22:57.375936 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 14:22:57.376494 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-14 14:22:57.377281 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 14:22:57.377658 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 14:22:57.378162 | orchestrator | 2025-05-14 14:22:57.378588 | orchestrator | Wednesday 14 May 2025 14:22:57 +0000 (0:00:01.083) 0:07:51.129 ********* 2025-05-14 14:22:57.379112 | orchestrator | =============================================================================== 2025-05-14 14:22:57.379584 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.97s 2025-05-14 14:22:57.380122 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.20s 2025-05-14 14:22:57.380678 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.26s 2025-05-14 14:22:57.381173 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.37s 2025-05-14 14:22:57.381613 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.87s 2025-05-14 14:22:57.382606 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.39s 2025-05-14 14:22:57.383024 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.91s 2025-05-14 14:22:57.383562 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.18s 2025-05-14 14:22:57.384011 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.40s 2025-05-14 14:22:57.384444 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.11s 2025-05-14 14:22:57.384969 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.40s 2025-05-14 14:22:57.385387 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.99s 2025-05-14 14:22:57.385929 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.66s 2025-05-14 14:22:57.386274 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.08s 2025-05-14 14:22:57.387209 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.02s 2025-05-14 14:22:57.387550 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.57s 2025-05-14 14:22:57.388118 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.14s 2025-05-14 14:22:57.388682 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.81s 2025-05-14 14:22:57.390161 | orchestrator | osism.commons.cleanup : Populate 
service facts -------------------------- 5.73s 2025-05-14 14:22:57.390890 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.71s 2025-05-14 14:22:57.979564 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-14 14:22:57.979669 | orchestrator | + osism apply network 2025-05-14 14:22:59.751004 | orchestrator | 2025-05-14 14:22:59 | INFO  | Task 277c7a90-1a07-4254-8c1c-e818bb73331b (network) was prepared for execution. 2025-05-14 14:22:59.751158 | orchestrator | 2025-05-14 14:22:59 | INFO  | It takes a moment until task 277c7a90-1a07-4254-8c1c-e818bb73331b (network) has been started and output is visible here. 2025-05-14 14:23:03.018495 | orchestrator | 2025-05-14 14:23:03.018680 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-14 14:23:03.022229 | orchestrator | 2025-05-14 14:23:03.022507 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-14 14:23:03.022529 | orchestrator | Wednesday 14 May 2025 14:23:03 +0000 (0:00:00.196) 0:00:00.196 ********* 2025-05-14 14:23:03.162563 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:03.236337 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:03.308677 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:03.383894 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:03.454712 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:03.699493 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:03.699648 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:03.700127 | orchestrator | 2025-05-14 14:23:03.700533 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-14 14:23:03.701040 | orchestrator | Wednesday 14 May 2025 14:23:03 +0000 (0:00:00.680) 0:00:00.876 ********* 2025-05-14 14:23:04.865339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:23:04.865913 | orchestrator | 2025-05-14 14:23:04.867128 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-14 14:23:04.868684 | orchestrator | Wednesday 14 May 2025 14:23:04 +0000 (0:00:01.164) 0:00:02.041 ********* 2025-05-14 14:23:06.993916 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:06.994628 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:06.995758 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:06.996292 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:06.997245 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:06.997703 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:06.999325 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:07.000875 | orchestrator | 2025-05-14 14:23:07.001790 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-14 14:23:07.002363 | orchestrator | Wednesday 14 May 2025 14:23:06 +0000 (0:00:02.126) 0:00:04.168 ********* 2025-05-14 14:23:08.640487 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:08.641092 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:08.641786 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:08.642771 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:08.643137 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:08.644251 | orchestrator | ok: 
[testbed-node-4] 2025-05-14 14:23:08.644957 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:08.645986 | orchestrator | 2025-05-14 14:23:08.646304 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-14 14:23:08.647203 | orchestrator | Wednesday 14 May 2025 14:23:08 +0000 (0:00:01.646) 0:00:05.815 ********* 2025-05-14 14:23:09.121280 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-14 14:23:09.721934 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-14 14:23:09.722161 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-14 14:23:09.722257 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-14 14:23:09.723034 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-14 14:23:09.723553 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-14 14:23:09.725722 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-14 14:23:09.727737 | orchestrator | 2025-05-14 14:23:09.728170 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-14 14:23:09.728628 | orchestrator | Wednesday 14 May 2025 14:23:09 +0000 (0:00:01.083) 0:00:06.898 ********* 2025-05-14 14:23:11.502284 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 14:23:11.502391 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:23:11.502477 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 14:23:11.502897 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 14:23:11.502916 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:23:11.503741 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 14:23:11.503963 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 14:23:11.504433 | orchestrator | 2025-05-14 14:23:11.506188 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-14 14:23:11.506836 | orchestrator | Wednesday 14 May 2025 14:23:11 +0000 (0:00:01.782) 0:00:08.681 ********* 2025-05-14 14:23:13.100216 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:13.100327 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:23:13.100677 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:23:13.103779 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:23:13.103804 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:23:13.103816 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:23:13.103874 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:23:13.104482 | orchestrator | 2025-05-14 14:23:13.104955 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-14 14:23:13.105459 | orchestrator | Wednesday 14 May 2025 14:23:13 +0000 (0:00:01.593) 0:00:10.274 ********* 2025-05-14 14:23:13.567934 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:23:14.076520 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:23:14.077406 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 14:23:14.078958 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 14:23:14.079120 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 14:23:14.080642 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 14:23:14.081532 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 14:23:14.082132 | orchestrator | 2025-05-14 14:23:14.082934 | orchestrator | TASK [osism.commons.network : Check if 
path for interface file exists] ********* 2025-05-14 14:23:14.083439 | orchestrator | Wednesday 14 May 2025 14:23:14 +0000 (0:00:00.981) 0:00:11.255 ********* 2025-05-14 14:23:14.516444 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:14.606278 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:15.205805 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:15.205997 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:15.206646 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:15.207330 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:15.207949 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:15.208582 | orchestrator | 2025-05-14 14:23:15.209158 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-14 14:23:15.209489 | orchestrator | Wednesday 14 May 2025 14:23:15 +0000 (0:00:01.126) 0:00:12.382 ********* 2025-05-14 14:23:15.372537 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:23:15.447645 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:23:15.527810 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:23:15.601757 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:23:15.691119 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:23:15.979273 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:23:15.979397 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:23:15.979513 | orchestrator | 2025-05-14 14:23:15.979537 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-14 14:23:15.979661 | orchestrator | Wednesday 14 May 2025 14:23:15 +0000 (0:00:00.774) 0:00:13.156 ********* 2025-05-14 14:23:17.855174 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:17.856539 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:17.858408 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:17.859727 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:17.862142 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:17.862172 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:17.862184 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:17.863553 | orchestrator | 2025-05-14 14:23:17.863578 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-14 14:23:17.863591 | orchestrator | Wednesday 14 May 2025 14:23:17 +0000 (0:00:01.872) 0:00:15.028 ********* 2025-05-14 14:23:18.601383 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-14 14:23:19.711371 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 14:23:19.712236 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 14:23:19.713752 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 14:23:19.715051 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 14:23:19.716361 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 14:23:19.717605 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 
14:23:19.718640 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 14:23:19.719879 | orchestrator | 2025-05-14 14:23:19.720571 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-14 14:23:19.721420 | orchestrator | Wednesday 14 May 2025 14:23:19 +0000 (0:00:01.856) 0:00:16.885 ********* 2025-05-14 14:23:21.234946 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:21.236570 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:23:21.238783 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:23:21.240043 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:23:21.241726 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:23:21.242497 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:23:21.243220 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:23:21.243925 | orchestrator | 2025-05-14 14:23:21.244697 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-14 14:23:21.245264 | orchestrator | Wednesday 14 May 2025 14:23:21 +0000 (0:00:01.524) 0:00:18.410 ********* 2025-05-14 14:23:22.582071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:23:22.584167 | orchestrator | 2025-05-14 14:23:22.584535 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-14 14:23:22.585763 | orchestrator | Wednesday 14 May 2025 14:23:22 +0000 (0:00:01.346) 0:00:19.757 ********* 2025-05-14 14:23:23.105825 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:23.536083 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:23.536300 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:23.537426 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:23.538870 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:23.539131 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:23.541600 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:23.542232 | orchestrator | 2025-05-14 14:23:23.542908 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-14 14:23:23.544537 | orchestrator | Wednesday 14 May 2025 14:23:23 +0000 (0:00:00.955) 0:00:20.712 ********* 2025-05-14 14:23:23.704187 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:23.789895 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:24.078676 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:24.172326 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:24.264431 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:24.415808 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:24.415903 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:24.416958 | orchestrator | 2025-05-14 14:23:24.417460 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-14 14:23:24.418156 | orchestrator | Wednesday 14 May 2025 14:23:24 +0000 (0:00:00.875) 0:00:21.587 ********* 2025-05-14 14:23:24.755249 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:24.858894 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:24.946717 | orchestrator | changed: [testbed-node-0] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:24.947437 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:25.407318 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:25.408559 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:25.409206 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:25.410534 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:25.412103 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:25.413232 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:25.414410 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:25.415505 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:25.416606 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 14:23:25.417482 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 14:23:25.418312 | orchestrator | 2025-05-14 14:23:25.419182 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-14 14:23:25.419648 | orchestrator | Wednesday 14 May 2025 14:23:25 +0000 (0:00:00.997) 0:00:22.585 ********* 2025-05-14 14:23:25.729025 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:23:25.812332 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:23:25.895303 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:23:25.975044 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:23:26.056807 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:23:27.220871 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:23:27.223186 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:23:27.223234 | orchestrator | 2025-05-14 14:23:27.224145 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-14 14:23:27.225203 | orchestrator | Wednesday 14 May 2025 14:23:27 +0000 (0:00:01.808) 0:00:24.393 ********* 2025-05-14 14:23:27.377576 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:23:27.473603 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:23:27.732977 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:23:27.811651 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:23:27.890663 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:23:27.923347 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:23:27.924171 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:23:27.924639 | orchestrator | 2025-05-14 14:23:27.925428 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:23:27.925857 | orchestrator | 2025-05-14 14:23:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:23:27.926169 | orchestrator | 2025-05-14 14:23:27 | INFO  | Please wait and do not abort execution. 
2025-05-14 14:23:27.926790 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.927925 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.928712 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.929289 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.930090 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.930642 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.931156 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:23:27.931738 | orchestrator | 2025-05-14 14:23:27.934742 | orchestrator | Wednesday 14 May 2025 14:23:27 +0000 (0:00:00.708) 0:00:25.103 ********* 2025-05-14 14:23:27.935060 | orchestrator | =============================================================================== 2025-05-14 14:23:27.936010 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.13s 2025-05-14 14:23:27.938487 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.87s 2025-05-14 14:23:27.939156 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.86s 2025-05-14 14:23:27.939482 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.81s 2025-05-14 14:23:27.940161 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.78s 2025-05-14 14:23:27.940351 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.65s 2025-05-14 14:23:27.941078 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2025-05-14 14:23:27.941626 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.52s 2025-05-14 14:23:27.942203 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.35s 2025-05-14 14:23:27.942605 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s 2025-05-14 14:23:27.942986 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-05-14 14:23:27.943485 | orchestrator | osism.commons.network : Create required directories --------------------- 1.08s 2025-05-14 14:23:27.944024 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.00s 2025-05-14 14:23:27.944543 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.98s 2025-05-14 14:23:27.944900 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.96s 2025-05-14 14:23:27.945416 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.88s 2025-05-14 14:23:27.945754 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.77s 2025-05-14 14:23:27.946257 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.71s 2025-05-14 14:23:27.946661 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.68s 2025-05-14 14:23:28.442974 | orchestrator | + osism apply wireguard 2025-05-14 14:23:29.829986 | orchestrator | 2025-05-14 14:23:29 | INFO  | Task 2b9feb1b-f603-43b4-812b-0717605e2df3 (wireguard) was prepared for execution. 2025-05-14 14:23:29.830235 | orchestrator | 2025-05-14 14:23:29 | INFO  | It takes a moment until task 2b9feb1b-f603-43b4-812b-0717605e2df3 (wireguard) has been started and output is visible here. 2025-05-14 14:23:32.907605 | orchestrator | 2025-05-14 14:23:32.907898 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-14 14:23:32.907924 | orchestrator | 2025-05-14 14:23:32.909407 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-14 14:23:32.909960 | orchestrator | Wednesday 14 May 2025 14:23:32 +0000 (0:00:00.165) 0:00:00.165 ********* 2025-05-14 14:23:34.352484 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:34.353893 | orchestrator | 2025-05-14 14:23:34.353937 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-14 14:23:34.355129 | orchestrator | Wednesday 14 May 2025 14:23:34 +0000 (0:00:01.444) 0:00:01.610 ********* 2025-05-14 14:23:40.560062 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:40.560784 | orchestrator | 2025-05-14 14:23:40.561085 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-14 14:23:40.561765 | orchestrator | Wednesday 14 May 2025 14:23:40 +0000 (0:00:06.209) 0:00:07.819 ********* 2025-05-14 14:23:41.087357 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:41.088155 | orchestrator | 2025-05-14 14:23:41.089069 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-14 14:23:41.090062 | orchestrator | Wednesday 14 May 2025 14:23:41 +0000 (0:00:00.530) 0:00:08.349 ********* 2025-05-14 14:23:41.460891 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:41.461595 | orchestrator | 2025-05-14 14:23:41.462234 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-14 14:23:41.462904 | orchestrator | Wednesday 14 May 2025 14:23:41 +0000 (0:00:00.374) 0:00:08.723 ********* 2025-05-14 14:23:41.978323 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:41.978502 | orchestrator | 2025-05-14 14:23:41.980608 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-14 14:23:41.980710 | orchestrator | Wednesday 14 May 2025 14:23:41 +0000 (0:00:00.516) 0:00:09.240 ********* 2025-05-14 14:23:42.483456 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:42.483787 | orchestrator | 2025-05-14 14:23:42.485070 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-14 14:23:42.485717 | orchestrator | Wednesday 14 May 2025 14:23:42 +0000 (0:00:00.506) 0:00:09.746 ********* 2025-05-14 14:23:42.898625 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:42.899757 | orchestrator | 2025-05-14 14:23:42.899807 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-14 14:23:42.899966 | orchestrator | Wednesday 14 May 2025 14:23:42 +0000 (0:00:00.411) 0:00:10.158 ********* 2025-05-14 14:23:44.054356 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:44.054875 | orchestrator | 2025-05-14 14:23:44.055109 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-14 14:23:44.055384 | orchestrator | Wednesday 14 May 2025 14:23:44 +0000 (0:00:01.152) 0:00:11.310 ********* 2025-05-14 14:23:44.953459 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 14:23:44.953572 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:44.955939 | orchestrator | 2025-05-14 14:23:44.956113 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-14 14:23:44.957511 | orchestrator | Wednesday 14 May 2025 14:23:44 +0000 (0:00:00.903) 0:00:12.214 ********* 2025-05-14 14:23:46.644942 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:46.645071 | orchestrator | 2025-05-14 14:23:46.645647 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-14 14:23:46.646231 | orchestrator | Wednesday 14 May 2025 14:23:46 +0000 (0:00:01.693) 0:00:13.907 ********* 2025-05-14 14:23:47.545325 | orchestrator | changed: [testbed-manager] 2025-05-14 14:23:47.546843 | orchestrator | 2025-05-14 14:23:47.547420 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:23:47.547968 | orchestrator | 2025-05-14 14:23:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:23:47.548053 | orchestrator | 2025-05-14 14:23:47 | INFO  | Please wait and do not abort execution. 2025-05-14 14:23:47.549279 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:23:47.549773 | orchestrator | 2025-05-14 14:23:47.550913 | orchestrator | Wednesday 14 May 2025 14:23:47 +0000 (0:00:00.897) 0:00:14.805 ********* 2025-05-14 14:23:47.551509 | orchestrator | =============================================================================== 2025-05-14 14:23:47.552011 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.21s 2025-05-14 14:23:47.552808 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-05-14 14:23:47.553444 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s 2025-05-14 14:23:47.554098 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s 2025-05-14 14:23:47.554677 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s 2025-05-14 14:23:47.555181 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2025-05-14 14:23:47.555774 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2025-05-14 14:23:47.556271 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-05-14 14:23:47.556889 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s 2025-05-14 14:23:47.558093 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-05-14 14:23:47.558624 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.37s 2025-05-14 14:23:48.055292 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-14 14:23:48.093682 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-14 14:23:48.093770 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-14 14:23:48.185506 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 163 0 --:--:-- --:--:-- --:--:-- 164 2025-05-14 14:23:48.199484 | orchestrator | + osism apply --environment custom workarounds 2025-05-14 14:23:49.566635 | orchestrator | 2025-05-14 14:23:49 | INFO  | Trying to run play workarounds in environment custom 2025-05-14 14:23:49.612661 | orchestrator | 2025-05-14 14:23:49 | INFO  | Task 0a4484ba-0d56-4438-aea0-bc2c3d0f7381 (workarounds) was prepared for execution. 2025-05-14 14:23:49.612754 | orchestrator | 2025-05-14 14:23:49 | INFO  | It takes a moment until task 0a4484ba-0d56-4438-aea0-bc2c3d0f7381 (workarounds) has been started and output is visible here. 2025-05-14 14:23:52.631834 | orchestrator | 2025-05-14 14:23:52.632373 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:23:52.632911 | orchestrator | 2025-05-14 14:23:52.634379 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-14 14:23:52.636223 | orchestrator | Wednesday 14 May 2025 14:23:52 +0000 (0:00:00.137) 0:00:00.137 ********* 2025-05-14 14:23:52.797993 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-14 14:23:52.880248 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-14 14:23:52.960474 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-14 14:23:53.040834 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-14 14:23:53.122996 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-14 14:23:53.363005 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-14 14:23:53.364114 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-14 14:23:53.365391 | orchestrator | 2025-05-14 14:23:53.366805 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-14 14:23:53.367205 | orchestrator | 2025-05-14 14:23:53.368249 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-14 14:23:53.368989 | orchestrator | Wednesday 14 May 2025 14:23:53 +0000 (0:00:00.732) 0:00:00.870 ********* 2025-05-14 14:23:55.888793 | orchestrator | ok: [testbed-manager] 2025-05-14 14:23:55.888968 | orchestrator | 2025-05-14 14:23:55.889409 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-14 14:23:55.890399 | orchestrator | 2025-05-14 14:23:55.891016 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-14 14:23:55.893552 | orchestrator | Wednesday 14 May 2025 14:23:55 +0000 (0:00:02.522) 0:00:03.392 ********* 2025-05-14 14:23:57.685456 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:23:57.685889 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:23:57.686643 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:23:57.689135 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:23:57.689182 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:23:57.689194 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:23:57.690138 | orchestrator | 2025-05-14 14:23:57.691133 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-14 14:23:57.691866 | orchestrator | 2025-05-14 
14:23:57.693199 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-14 14:23:57.694295 | orchestrator | Wednesday 14 May 2025 14:23:57 +0000 (0:00:01.797) 0:00:05.190 ********* 2025-05-14 14:23:59.153946 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 14:23:59.154477 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 14:23:59.154639 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 14:23:59.156450 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 14:23:59.157359 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 14:23:59.158318 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 14:23:59.159995 | orchestrator | 2025-05-14 14:23:59.160036 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-14 14:23:59.160050 | orchestrator | Wednesday 14 May 2025 14:23:59 +0000 (0:00:01.466) 0:00:06.656 ********* 2025-05-14 14:24:03.050533 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:24:03.051040 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:24:03.051342 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:24:03.051990 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:24:03.053588 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:24:03.054106 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:24:03.054610 | orchestrator | 2025-05-14 14:24:03.055646 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-14 14:24:03.055925 | orchestrator | Wednesday 14 May 2025 14:24:03 +0000 (0:00:03.900) 0:00:10.557 ********* 2025-05-14 14:24:03.194635 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:24:03.270483 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:24:03.342916 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:24:03.555056 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:24:03.683713 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:24:03.683872 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:24:03.684733 | orchestrator | 2025-05-14 14:24:03.686324 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-14 14:24:03.687223 | orchestrator | 2025-05-14 14:24:03.688458 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-14 14:24:03.689479 | orchestrator | Wednesday 14 May 2025 14:24:03 +0000 (0:00:00.631) 0:00:11.188 ********* 2025-05-14 14:24:05.408348 | orchestrator | changed: [testbed-manager] 2025-05-14 14:24:05.408468 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:24:05.408797 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:24:05.410323 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:24:05.413374 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:24:05.413416 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:24:05.413428 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:24:05.413817 | orchestrator | 2025-05-14 14:24:05.414394 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-14 14:24:05.414970 | orchestrator | Wednesday 14 May 2025 14:24:05 +0000 (0:00:01.725) 0:00:12.914 ********* 2025-05-14 14:24:07.061491 | orchestrator | changed: [testbed-manager] 2025-05-14 14:24:07.061689 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:24:07.064105 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:24:07.064139 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:24:07.064151 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:24:07.064578 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:24:07.065526 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:24:07.066351 | orchestrator | 2025-05-14 14:24:07.067146 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-14 14:24:07.067668 | orchestrator | Wednesday 14 May 2025 14:24:07 +0000 (0:00:01.649) 0:00:14.563 ********* 2025-05-14 14:24:08.717673 | orchestrator | ok: [testbed-manager] 2025-05-14 14:24:08.718496 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:24:08.720128 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:24:08.720666 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:24:08.721504 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:24:08.722379 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:24:08.723024 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:24:08.723524 | orchestrator | 2025-05-14 14:24:08.724404 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-14 14:24:08.725096 | orchestrator | Wednesday 14 May 2025 14:24:08 +0000 (0:00:01.659) 0:00:16.223 ********* 2025-05-14 14:24:10.501133 | orchestrator | changed: [testbed-manager] 2025-05-14 14:24:10.501396 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:24:10.502175 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:24:10.505638 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:24:10.505689 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:24:10.505701 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:24:10.505712 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:24:10.505729 | orchestrator | 2025-05-14 14:24:10.505743 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-14 14:24:10.506169 | orchestrator | Wednesday 14 May 2025 14:24:10 +0000 (0:00:01.784) 0:00:18.007 ********* 2025-05-14 14:24:10.652553 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:24:10.728458 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:24:10.828580 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:24:10.904619 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:24:11.126831 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:24:11.263202 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:24:11.263336 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:24:11.265013 | orchestrator | 2025-05-14 14:24:11.265964 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-14 14:24:11.266653 | orchestrator | 2025-05-14 14:24:11.267392 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-14 14:24:11.268849 | orchestrator | Wednesday 14 May 2025 14:24:11 +0000 (0:00:00.760) 0:00:18.768 ********* 2025-05-14 14:24:13.825086 | orchestrator | ok: [testbed-manager] 2025-05-14 14:24:13.825193 
| orchestrator | ok: [testbed-node-2] 2025-05-14 14:24:13.825875 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:24:13.826431 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:24:13.827167 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:24:13.828776 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:24:13.829185 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:24:13.829848 | orchestrator | 2025-05-14 14:24:13.830487 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:24:13.831024 | orchestrator | 2025-05-14 14:24:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:24:13.831181 | orchestrator | 2025-05-14 14:24:13 | INFO  | Please wait and do not abort execution. 2025-05-14 14:24:13.831776 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:24:13.832915 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:13.833107 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:13.833480 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:13.833818 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:13.834153 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:13.834484 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:13.834739 | orchestrator | 2025-05-14 14:24:13.835427 | orchestrator | Wednesday 14 May 2025 14:24:13 +0000 (0:00:02.563) 0:00:21.331 ********* 2025-05-14 14:24:13.836230 | orchestrator | =============================================================================== 2025-05-14 14:24:13.836458 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.90s 2025-05-14 14:24:13.836871 | orchestrator | Install python3-docker -------------------------------------------------- 2.56s 2025-05-14 14:24:13.837179 | orchestrator | Apply netplan configuration --------------------------------------------- 2.52s 2025-05-14 14:24:13.837396 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-05-14 14:24:13.838166 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.78s 2025-05-14 14:24:13.838195 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.73s 2025-05-14 14:24:13.838372 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s 2025-05-14 14:24:13.838751 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s 2025-05-14 14:24:13.839182 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2025-05-14 14:24:13.839466 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.76s 2025-05-14 14:24:13.840108 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s 2025-05-14 14:24:13.840502 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.63s 2025-05-14 14:24:14.358644 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-14 14:24:16.000806 | orchestrator | 2025-05-14 14:24:16 | INFO  | Task d6626eea-d4ce-4761-a431-10207a50d164 (reboot) was prepared for execution. 2025-05-14 14:24:16.000924 | orchestrator | 2025-05-14 14:24:16 | INFO  | It takes a moment until task d6626eea-d4ce-4761-a431-10207a50d164 (reboot) has been started and output is visible here. 2025-05-14 14:24:19.164587 | orchestrator | 2025-05-14 14:24:19.164778 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 14:24:19.166380 | orchestrator | 2025-05-14 14:24:19.167145 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 14:24:19.167874 | orchestrator | Wednesday 14 May 2025 14:24:19 +0000 (0:00:00.143) 0:00:00.143 ********* 2025-05-14 14:24:19.268599 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:24:19.268769 | orchestrator | 2025-05-14 14:24:19.269430 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 14:24:19.270124 | orchestrator | Wednesday 14 May 2025 14:24:19 +0000 (0:00:00.106) 0:00:00.249 ********* 2025-05-14 14:24:20.216664 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:24:20.217671 | orchestrator | 2025-05-14 14:24:20.218286 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 14:24:20.220113 | orchestrator | Wednesday 14 May 2025 14:24:20 +0000 (0:00:00.949) 0:00:01.198 ********* 2025-05-14 14:24:20.339014 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:24:20.339502 | orchestrator | 2025-05-14 14:24:20.342237 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 14:24:20.342925 | orchestrator | 2025-05-14 14:24:20.343787 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 14:24:20.344461 | orchestrator | Wednesday 14 May 2025 14:24:20 +0000 (0:00:00.122) 0:00:01.321 ********* 2025-05-14 14:24:20.447418 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:24:20.447554 | orchestrator | 2025-05-14 14:24:20.448665 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 14:24:20.449702 | orchestrator | Wednesday 14 May 2025 14:24:20 +0000 (0:00:00.106) 0:00:01.427 ********* 2025-05-14 14:24:21.099361 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:24:21.099576 | orchestrator | 2025-05-14 14:24:21.100060 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 14:24:21.100790 | orchestrator | Wednesday 14 May 2025 14:24:21 +0000 (0:00:00.653) 0:00:02.081 ********* 2025-05-14 14:24:21.216650 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:24:21.216821 | orchestrator | 2025-05-14 14:24:21.217383 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 14:24:21.219086 | orchestrator | 2025-05-14 14:24:21.219108 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 14:24:21.219939 | orchestrator | Wednesday 14 May 2025 14:24:21 +0000 (0:00:00.115) 0:00:02.197 ********* 2025-05-14 14:24:21.328346 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:24:21.330670 | orchestrator | 2025-05-14 14:24:21.330699 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-14 14:24:21.330713 | orchestrator | Wednesday 14 May 2025 14:24:21 +0000 (0:00:00.112) 0:00:02.309 ********* 2025-05-14 14:24:22.104836 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:24:22.104944 | orchestrator | 2025-05-14 14:24:22.104958 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 14:24:22.104991 | orchestrator | Wednesday 14 May 2025 14:24:22 +0000 (0:00:00.777) 0:00:03.087 ********* 2025-05-14 14:24:22.226527 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:24:22.227699 | orchestrator | 2025-05-14 14:24:22.230159 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 14:24:22.230912 | orchestrator | 2025-05-14 14:24:22.231468 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 14:24:22.231851 | orchestrator | Wednesday 14 May 2025 14:24:22 +0000 (0:00:00.118) 0:00:03.206 ********* 2025-05-14 14:24:22.325553 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:24:22.325651 | orchestrator | 2025-05-14 14:24:22.325734 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 14:24:22.326698 | orchestrator | Wednesday 14 May 2025 14:24:22 +0000 (0:00:00.100) 0:00:03.307 ********* 2025-05-14 14:24:22.984913 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:24:22.985100 | orchestrator | 2025-05-14 14:24:22.986398 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 14:24:22.986469 | orchestrator | Wednesday 14 May 2025 14:24:22 +0000 (0:00:00.659) 0:00:03.966 ********* 2025-05-14 14:24:23.109289 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:24:23.109422 | orchestrator | 2025-05-14 14:24:23.109524 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 14:24:23.109551 | orchestrator | 2025-05-14 14:24:23.109836 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 14:24:23.110204 | orchestrator | Wednesday 14 May 2025 14:24:23 +0000 (0:00:00.119) 0:00:04.086 ********* 2025-05-14 14:24:23.200773 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:24:23.201049 | orchestrator | 2025-05-14 14:24:23.201556 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 14:24:23.202279 | orchestrator | Wednesday 14 May 2025 14:24:23 +0000 (0:00:00.096) 0:00:04.183 ********* 2025-05-14 14:24:23.857229 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:24:23.857448 | orchestrator | 2025-05-14 14:24:23.858896 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 14:24:23.860125 | orchestrator | Wednesday 14 May 2025 14:24:23 +0000 (0:00:00.653) 0:00:04.836 ********* 2025-05-14 14:24:23.975069 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:24:23.975599 | orchestrator | 2025-05-14 14:24:23.978459 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 14:24:23.978490 | orchestrator | 2025-05-14 14:24:23.978504 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 14:24:23.978516 | orchestrator | Wednesday 14 May 2025 14:24:23 +0000 (0:00:00.120) 0:00:04.957 ********* 2025-05-14 14:24:24.091476 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 14:24:24.091581 | orchestrator | 2025-05-14 14:24:24.092114 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 14:24:24.092919 | orchestrator | Wednesday 14 May 2025 14:24:24 +0000 (0:00:00.116) 0:00:05.073 ********* 2025-05-14 14:24:24.832179 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:24:24.832842 | orchestrator | 2025-05-14 14:24:24.833701 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 14:24:24.834605 | orchestrator | Wednesday 14 May 2025 14:24:24 +0000 (0:00:00.737) 0:00:05.811 ********* 2025-05-14 14:24:24.867701 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:24:24.869292 | orchestrator | 2025-05-14 14:24:24.869414 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:24:24.869476 | orchestrator | 2025-05-14 14:24:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:24:24.869491 | orchestrator | 2025-05-14 14:24:24 | INFO  | Please wait and do not abort execution. 2025-05-14 14:24:24.870890 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:24.871482 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:24.871966 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:24.872680 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:24.873433 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:24.873954 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:24:24.874417 | orchestrator | 2025-05-14 14:24:24.874935 | orchestrator | Wednesday 14 May 2025 14:24:24 +0000 (0:00:00.038) 0:00:05.849 ********* 2025-05-14 14:24:24.875366 | orchestrator | =============================================================================== 2025-05-14 14:24:24.875965 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.43s 2025-05-14 14:24:24.876432 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2025-05-14 14:24:24.876905 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s 2025-05-14 14:24:25.517521 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-14 14:24:27.094602 | orchestrator | 2025-05-14 14:24:27 | INFO  | Task bc6ec63a-268f-4a4c-9f5e-a08e882aaba9 (wait-for-connection) was prepared for execution. 2025-05-14 14:24:27.094708 | orchestrator | 2025-05-14 14:24:27 | INFO  | It takes a moment until task bc6ec63a-268f-4a4c-9f5e-a08e882aaba9 (wait-for-connection) has been started and output is visible here. 
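
The reboot play above uses the "do not wait for the reboot to complete" variant on every node (the wait task is skipped) and leaves reachability checking to the separate wait-for-connection run that follows. A minimal fire-and-forget sketch of the same idea over plain SSH, with placeholder hostnames, is:

for node in testbed-node-{0..5}; do
  # Trigger the reboot and move on; the SSH session may drop with a
  # non-zero exit status, so that is not treated as a failure.
  ssh -o BatchMode=yes "$node" 'sudo systemctl reboot' || true
done
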
2025-05-14 14:24:30.131746 | orchestrator | 2025-05-14 14:24:30.132092 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-14 14:24:30.132721 | orchestrator | 2025-05-14 14:24:30.138925 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-14 14:24:30.138955 | orchestrator | Wednesday 14 May 2025 14:24:30 +0000 (0:00:00.186) 0:00:00.186 ********* 2025-05-14 14:24:43.298535 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:24:43.298700 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:24:43.298718 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:24:43.298809 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:24:43.299406 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:24:43.300535 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:24:43.301840 | orchestrator | 2025-05-14 14:24:43.302527 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:24:43.302819 | orchestrator | 2025-05-14 14:24:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:24:43.303050 | orchestrator | 2025-05-14 14:24:43 | INFO  | Please wait and do not abort execution. 2025-05-14 14:24:43.303622 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:24:43.304128 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:24:43.304542 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:24:43.304970 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:24:43.305409 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:24:43.305874 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:24:43.306926 | orchestrator | 2025-05-14 14:24:43.307012 | orchestrator | Wednesday 14 May 2025 14:24:43 +0000 (0:00:13.166) 0:00:13.353 ********* 2025-05-14 14:24:43.307342 | orchestrator | =============================================================================== 2025-05-14 14:24:43.307925 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.17s 2025-05-14 14:24:43.772436 | orchestrator | + osism apply hddtemp 2025-05-14 14:24:45.198457 | orchestrator | 2025-05-14 14:24:45 | INFO  | Task 5941f5a3-392a-40a6-a4ab-ba45d2ef5bf9 (hddtemp) was prepared for execution. 2025-05-14 14:24:45.198555 | orchestrator | 2025-05-14 14:24:45 | INFO  | It takes a moment until task 5941f5a3-392a-40a6-a4ab-ba45d2ef5bf9 (hddtemp) has been started and output is visible here. 
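
The wait-for-connection play above simply blocks until each rebooted node answers again. A rough shell equivalent, assuming SSH reachability is the signal being checked and using placeholder hostnames and retry limits, is:

for node in testbed-node-{0..5}; do
  for attempt in $(seq 1 60); do
    # A trivial remote command succeeding is taken as "reachable again".
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true; then
      break
    fi
    sleep 5
  done
done
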
2025-05-14 14:24:48.296915 | orchestrator | 2025-05-14 14:24:48.297044 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-14 14:24:48.300526 | orchestrator | 2025-05-14 14:24:48.300597 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-14 14:24:48.301016 | orchestrator | Wednesday 14 May 2025 14:24:48 +0000 (0:00:00.207) 0:00:00.208 ********* 2025-05-14 14:24:48.458511 | orchestrator | ok: [testbed-manager] 2025-05-14 14:24:48.537887 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:24:48.607224 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:24:48.680326 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:24:48.753384 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:24:48.977554 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:24:48.978097 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:24:48.979542 | orchestrator | 2025-05-14 14:24:48.980050 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-14 14:24:48.983403 | orchestrator | Wednesday 14 May 2025 14:24:48 +0000 (0:00:00.683) 0:00:00.891 ********* 2025-05-14 14:24:50.105980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:24:50.106230 | orchestrator | 2025-05-14 14:24:50.109989 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-14 14:24:50.110055 | orchestrator | Wednesday 14 May 2025 14:24:50 +0000 (0:00:01.126) 0:00:02.017 ********* 2025-05-14 14:24:52.122755 | orchestrator | ok: [testbed-manager] 2025-05-14 14:24:52.126077 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:24:52.126116 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:24:52.126129 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:24:52.126140 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:24:52.126151 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:24:52.128281 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:24:52.128388 | orchestrator | 2025-05-14 14:24:52.128937 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-14 14:24:52.130349 | orchestrator | Wednesday 14 May 2025 14:24:52 +0000 (0:00:02.019) 0:00:04.037 ********* 2025-05-14 14:24:52.716000 | orchestrator | changed: [testbed-manager] 2025-05-14 14:24:52.805124 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:24:53.328759 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:24:53.328972 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:24:53.331977 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:24:53.332224 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:24:53.332247 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:24:53.332523 | orchestrator | 2025-05-14 14:24:53.333504 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-14 14:24:53.334400 | orchestrator | Wednesday 14 May 2025 14:24:53 +0000 (0:00:01.203) 0:00:05.240 ********* 2025-05-14 14:24:54.570825 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:24:54.571403 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:24:54.572965 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:24:54.572994 | orchestrator | ok: [testbed-node-3] 2025-05-14 
14:24:54.573625 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:24:54.576232 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:24:54.576275 | orchestrator | ok: [testbed-manager] 2025-05-14 14:24:54.576287 | orchestrator | 2025-05-14 14:24:54.576384 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-14 14:24:54.576900 | orchestrator | Wednesday 14 May 2025 14:24:54 +0000 (0:00:01.242) 0:00:06.482 ********* 2025-05-14 14:24:54.828158 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:24:54.915537 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:24:54.996242 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:24:55.077449 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:24:55.210952 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:24:55.212066 | orchestrator | changed: [testbed-manager] 2025-05-14 14:24:55.213065 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:24:55.213972 | orchestrator | 2025-05-14 14:24:55.215667 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-14 14:24:55.216212 | orchestrator | Wednesday 14 May 2025 14:24:55 +0000 (0:00:00.643) 0:00:07.126 ********* 2025-05-14 14:25:08.532982 | orchestrator | changed: [testbed-manager] 2025-05-14 14:25:08.533105 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:25:08.533122 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:25:08.533135 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:25:08.533146 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:25:08.533157 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:25:08.533168 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:25:08.535085 | orchestrator | 2025-05-14 14:25:08.535127 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-14 14:25:08.535140 | orchestrator | Wednesday 14 May 2025 14:25:08 +0000 (0:00:13.313) 0:00:20.439 ********* 2025-05-14 14:25:09.705922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:25:09.706653 | orchestrator | 2025-05-14 14:25:09.707680 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-14 14:25:09.708806 | orchestrator | Wednesday 14 May 2025 14:25:09 +0000 (0:00:01.177) 0:00:21.617 ********* 2025-05-14 14:25:11.458087 | orchestrator | changed: [testbed-manager] 2025-05-14 14:25:11.459074 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:25:11.460414 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:25:11.462229 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:25:11.462713 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:25:11.464096 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:25:11.464928 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:25:11.466168 | orchestrator | 2025-05-14 14:25:11.466947 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:25:11.468135 | orchestrator | 2025-05-14 14:25:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:25:11.468490 | orchestrator | 2025-05-14 14:25:11 | INFO  | Please wait and do not abort execution. 
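
The hddtemp role above replaces the legacy hddtemp package with the kernel's drivetemp hwmon driver plus lm-sensors. A rough per-host equivalent of those tasks, assuming a Debian-family host and the standard modules-load.d mechanism (the role's exact implementation is not shown in this log), is:

sudo apt-get remove -y hddtemp                                # Remove hddtemp package
echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf  # Enable Kernel Module drivetemp at boot
modinfo drivetemp >/dev/null 2>&1 && sudo modprobe drivetemp  # Check availability, then load it
sudo apt-get install -y lm-sensors                            # Install lm-sensors
sudo systemctl enable --now lm-sensors                        # Manage lm-sensors service
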
2025-05-14 14:25:11.470081 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:25:11.470756 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:11.471727 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:11.472280 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:11.473573 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:11.474310 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:11.474888 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:11.475708 | orchestrator | 2025-05-14 14:25:11.476303 | orchestrator | Wednesday 14 May 2025 14:25:11 +0000 (0:00:01.756) 0:00:23.373 ********* 2025-05-14 14:25:11.476795 | orchestrator | =============================================================================== 2025-05-14 14:25:11.477086 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.31s 2025-05-14 14:25:11.477775 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s 2025-05-14 14:25:11.478314 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.76s 2025-05-14 14:25:11.478634 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.24s 2025-05-14 14:25:11.480071 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.20s 2025-05-14 14:25:11.481161 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2025-05-14 14:25:11.482135 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.13s 2025-05-14 14:25:11.483246 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-05-14 14:25:11.483520 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.64s 2025-05-14 14:25:11.985029 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-14 14:25:13.307680 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 14:25:13.307786 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-14 14:25:13.307801 | orchestrator | + local max_attempts=60 2025-05-14 14:25:13.307813 | orchestrator | + local name=ceph-ansible 2025-05-14 14:25:13.307838 | orchestrator | + local attempt_num=1 2025-05-14 14:25:13.307919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-14 14:25:13.340119 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 14:25:13.340201 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-14 14:25:13.340212 | orchestrator | + local max_attempts=60 2025-05-14 14:25:13.340221 | orchestrator | + local name=kolla-ansible 2025-05-14 14:25:13.340229 | orchestrator | + local attempt_num=1 2025-05-14 14:25:13.340294 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-14 14:25:13.365403 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 14:25:13.365496 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-05-14 14:25:13.365516 | orchestrator | + local max_attempts=60 2025-05-14 14:25:13.365533 | orchestrator | + local name=osism-ansible 2025-05-14 14:25:13.365548 | orchestrator | + local attempt_num=1 2025-05-14 14:25:13.366009 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-14 14:25:13.397389 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 14:25:13.397453 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-14 14:25:13.397459 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-14 14:25:13.549311 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-14 14:25:13.705532 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-14 14:25:13.867112 | orchestrator | ARA in osism-ansible already disabled. 2025-05-14 14:25:14.010743 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-14 14:25:14.010957 | orchestrator | + osism apply gather-facts 2025-05-14 14:25:15.417605 | orchestrator | 2025-05-14 14:25:15 | INFO  | Task 685ecac3-787b-44b9-bb96-e67fd0633223 (gather-facts) was prepared for execution. 2025-05-14 14:25:15.417715 | orchestrator | 2025-05-14 14:25:15 | INFO  | It takes a moment until task 685ecac3-787b-44b9-bb96-e67fd0633223 (gather-facts) has been started and output is visible here. 2025-05-14 14:25:18.443436 | orchestrator | 2025-05-14 14:25:18.444134 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 14:25:18.444547 | orchestrator | 2025-05-14 14:25:18.445508 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 14:25:18.445690 | orchestrator | Wednesday 14 May 2025 14:25:18 +0000 (0:00:00.183) 0:00:00.183 ********* 2025-05-14 14:25:23.461862 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:25:23.462727 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:25:23.464154 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:25:23.466055 | orchestrator | ok: [testbed-manager] 2025-05-14 14:25:23.466144 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:25:23.467614 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:25:23.468880 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:25:23.470508 | orchestrator | 2025-05-14 14:25:23.471476 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 14:25:23.472331 | orchestrator | 2025-05-14 14:25:23.473104 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 14:25:23.473781 | orchestrator | Wednesday 14 May 2025 14:25:23 +0000 (0:00:05.020) 0:00:05.203 ********* 2025-05-14 14:25:23.611478 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:25:23.682491 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:25:23.757831 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:25:23.832408 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:25:23.909490 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:25:23.946576 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:25:23.946800 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:25:23.950469 | orchestrator | 2025-05-14 14:25:23.950516 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:25:23.950531 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 
14:25:23.950544 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:23.950578 | orchestrator | 2025-05-14 14:25:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:25:23.950591 | orchestrator | 2025-05-14 14:25:23 | INFO  | Please wait and do not abort execution. 2025-05-14 14:25:23.951474 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:23.952927 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:23.953615 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:23.954848 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:23.955883 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:25:23.956275 | orchestrator | 2025-05-14 14:25:23.957631 | orchestrator | Wednesday 14 May 2025 14:25:23 +0000 (0:00:00.485) 0:00:05.689 ********* 2025-05-14 14:25:23.957842 | orchestrator | =============================================================================== 2025-05-14 14:25:23.959200 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.02s 2025-05-14 14:25:23.959784 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-05-14 14:25:24.481021 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-14 14:25:24.499327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-14 14:25:24.517688 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-14 14:25:24.531295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-14 14:25:24.550115 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-14 14:25:24.568022 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-14 14:25:24.580684 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-14 14:25:24.591114 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-14 14:25:24.600904 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-14 14:25:24.622228 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-14 14:25:24.638309 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-14 14:25:24.657580 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-14 14:25:24.679202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-14 14:25:24.697634 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-14 14:25:24.716273 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-14 14:25:24.734705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-14 14:25:24.758132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-14 14:25:24.775320 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-14 14:25:24.797555 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-14 14:25:24.814202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-14 14:25:24.829714 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-14 14:25:25.246356 | orchestrator | ok: Runtime: 0:24:37.487948 2025-05-14 14:25:25.361117 | 2025-05-14 14:25:25.361268 | TASK [Deploy services] 2025-05-14 14:25:25.897508 | orchestrator | skipping: Conditional result was False 2025-05-14 14:25:25.918782 | 2025-05-14 14:25:25.919018 | TASK [Deploy in a nutshell] 2025-05-14 14:25:26.651817 | orchestrator | + set -e 2025-05-14 14:25:26.652001 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 14:25:26.652025 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 14:25:26.652046 | orchestrator | ++ INTERACTIVE=false 2025-05-14 14:25:26.652068 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 14:25:26.652081 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 14:25:26.652109 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 14:25:26.652154 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 14:25:26.652181 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 14:25:26.652195 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 14:25:26.652211 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 14:25:26.652223 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 14:25:26.652240 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 14:25:26.652252 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 14:25:26.652273 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 14:25:26.652284 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 14:25:26.652299 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 14:25:26.652310 | orchestrator | ++ export ARA=false 2025-05-14 14:25:26.652321 | orchestrator | ++ ARA=false 2025-05-14 14:25:26.652332 | orchestrator | ++ export TEMPEST=false 2025-05-14 14:25:26.652344 | orchestrator | ++ TEMPEST=false 2025-05-14 14:25:26.652386 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 14:25:26.652397 | orchestrator | ++ IS_ZUUL=true 2025-05-14 14:25:26.652408 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:25:26.652420 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.165 2025-05-14 14:25:26.652430 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 14:25:26.652441 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 14:25:26.652452 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 14:25:26.652462 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 14:25:26.652473 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 
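
The wait_for_container_healthy calls traced further above (before the gather-facts run) poll the health status Docker reports for the manager containers. A minimal sketch of such a helper, matching the locals and the docker inspect call visible in the trace but with an assumed loop body and sleep interval, is:

wait_for_container_healthy() {
  local max_attempts="$1"
  local name="$2"
  local attempt_num=1
  # Poll the container health status until it is healthy or the attempt
  # budget is exhausted.
  until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
    if (( attempt_num >= max_attempts )); then
      echo "container ${name} did not become healthy in time" >&2
      return 1
    fi
    attempt_num=$((attempt_num + 1))
    sleep 5
  done
}

wait_for_container_healthy 60 ceph-ansible
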
2025-05-14 14:25:26.652484 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 14:25:26.652495 | orchestrator | 2025-05-14 14:25:26.652506 | orchestrator | # PULL IMAGES 2025-05-14 14:25:26.652517 | orchestrator | 2025-05-14 14:25:26.652528 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 14:25:26.652539 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 14:25:26.652550 | orchestrator | + echo 2025-05-14 14:25:26.652561 | orchestrator | + echo '# PULL IMAGES' 2025-05-14 14:25:26.652572 | orchestrator | + echo 2025-05-14 14:25:26.653437 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 14:25:26.722134 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 14:25:26.722256 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-14 14:25:28.074723 | orchestrator | 2025-05-14 14:25:28 | INFO  | Trying to run play pull-images in environment custom 2025-05-14 14:25:28.121187 | orchestrator | 2025-05-14 14:25:28 | INFO  | Task 7d9ed4d9-88d6-4b8a-bdfb-b9b706e7cdc8 (pull-images) was prepared for execution. 2025-05-14 14:25:28.121281 | orchestrator | 2025-05-14 14:25:28 | INFO  | It takes a moment until task 7d9ed4d9-88d6-4b8a-bdfb-b9b706e7cdc8 (pull-images) has been started and output is visible here. 2025-05-14 14:25:31.072855 | orchestrator | 2025-05-14 14:25:31.075149 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-14 14:25:31.075438 | orchestrator | 2025-05-14 14:25:31.076288 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-14 14:25:31.077218 | orchestrator | Wednesday 14 May 2025 14:25:31 +0000 (0:00:00.138) 0:00:00.138 ********* 2025-05-14 14:26:07.565833 | orchestrator | changed: [testbed-manager] 2025-05-14 14:26:07.566080 | orchestrator | 2025-05-14 14:26:07.566781 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-14 14:26:07.566818 | orchestrator | Wednesday 14 May 2025 14:26:07 +0000 (0:00:36.495) 0:00:36.633 ********* 2025-05-14 14:26:52.928894 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-14 14:26:52.929050 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-14 14:26:52.929782 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-14 14:26:52.930201 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-14 14:26:52.930715 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-14 14:26:52.931157 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-14 14:26:52.931701 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-14 14:26:52.932356 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-14 14:26:52.933734 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-14 14:26:52.935188 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-14 14:26:52.935490 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-14 14:26:52.935950 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-14 14:26:52.936331 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-14 14:26:52.936722 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-14 14:26:52.937201 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-14 14:26:52.937880 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-14 14:26:52.938259 | orchestrator | changed: [testbed-manager] => 
(item=octavia) 2025-05-14 14:26:52.938671 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-14 14:26:52.938999 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-14 14:26:52.939479 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-14 14:26:52.939931 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-14 14:26:52.940539 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-14 14:26:52.940822 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-14 14:26:52.941310 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-14 14:26:52.941652 | orchestrator | 2025-05-14 14:26:52.942174 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:26:52.942686 | orchestrator | 2025-05-14 14:26:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:26:52.942711 | orchestrator | 2025-05-14 14:26:52 | INFO  | Please wait and do not abort execution. 2025-05-14 14:26:52.943097 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:26:52.943509 | orchestrator | 2025-05-14 14:26:52.943995 | orchestrator | Wednesday 14 May 2025 14:26:52 +0000 (0:00:45.363) 0:01:21.997 ********* 2025-05-14 14:26:52.944395 | orchestrator | =============================================================================== 2025-05-14 14:26:52.944887 | orchestrator | Pull other images ------------------------------------------------------ 45.36s 2025-05-14 14:26:52.945267 | orchestrator | Pull keystone image ---------------------------------------------------- 36.50s 2025-05-14 14:26:54.913728 | orchestrator | 2025-05-14 14:26:54 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-14 14:26:54.965835 | orchestrator | 2025-05-14 14:26:54 | INFO  | Task d5acf1c1-f55e-48db-8e19-49c896f2d2d2 (wipe-partitions) was prepared for execution. 2025-05-14 14:26:54.965955 | orchestrator | 2025-05-14 14:26:54 | INFO  | It takes a moment until task d5acf1c1-f55e-48db-8e19-49c896f2d2d2 (wipe-partitions) has been started and output is visible here. 
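
The pull-images play above pre-fetches the Kolla service images on the manager so the later deployment does not have to pull them on demand. As a hedged sketch only, with placeholder registry, namespace, tag and image names (the real values come from the OSISM/Kolla configuration and are not shown in this log), the step amounts to:

registry=registry.example.com/kolla
tag=2024.2
for image in keystone aodh barbican ceilometer cinder common designate glance; do
  docker pull "${registry}/${image}:${tag}" || echo "pull failed: ${image}" >&2
done
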
2025-05-14 14:26:58.050896 | orchestrator | 2025-05-14 14:26:58.051933 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-14 14:26:58.051976 | orchestrator | 2025-05-14 14:26:58.052257 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-14 14:26:58.052537 | orchestrator | Wednesday 14 May 2025 14:26:58 +0000 (0:00:00.122) 0:00:00.122 ********* 2025-05-14 14:26:58.664762 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:26:58.664873 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:26:58.664888 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:26:58.664900 | orchestrator | 2025-05-14 14:26:58.664913 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-14 14:26:58.665147 | orchestrator | Wednesday 14 May 2025 14:26:58 +0000 (0:00:00.614) 0:00:00.737 ********* 2025-05-14 14:26:58.831393 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:26:58.918242 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:26:58.918377 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:26:58.918392 | orchestrator | 2025-05-14 14:26:58.918460 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-14 14:26:58.918536 | orchestrator | Wednesday 14 May 2025 14:26:58 +0000 (0:00:00.254) 0:00:00.992 ********* 2025-05-14 14:26:59.697145 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:26:59.697251 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:26:59.697266 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:26:59.697623 | orchestrator | 2025-05-14 14:26:59.698338 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-14 14:26:59.698565 | orchestrator | Wednesday 14 May 2025 14:26:59 +0000 (0:00:00.773) 0:00:01.766 ********* 2025-05-14 14:26:59.855625 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:26:59.948954 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:26:59.951669 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:26:59.957330 | orchestrator | 2025-05-14 14:26:59.957382 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-14 14:26:59.957397 | orchestrator | Wednesday 14 May 2025 14:26:59 +0000 (0:00:00.256) 0:00:02.022 ********* 2025-05-14 14:27:01.230239 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 14:27:01.230290 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 14:27:01.230302 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 14:27:01.232159 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 14:27:01.232182 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 14:27:01.232193 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 14:27:01.232204 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 14:27:01.232219 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 14:27:01.232231 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 14:27:01.232242 | orchestrator | 2025-05-14 14:27:01.232254 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-14 14:27:01.232268 | orchestrator | Wednesday 14 May 2025 14:27:01 +0000 (0:00:01.281) 0:00:03.303 ********* 2025-05-14 14:27:02.546609 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 14:27:02.547473 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 14:27:02.548326 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 14:27:02.549750 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 14:27:02.552194 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 14:27:02.552231 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 14:27:02.552244 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 14:27:02.552771 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 14:27:02.553370 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 14:27:02.553915 | orchestrator | 2025-05-14 14:27:02.555102 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-14 14:27:02.555524 | orchestrator | Wednesday 14 May 2025 14:27:02 +0000 (0:00:01.315) 0:00:04.619 ********* 2025-05-14 14:27:04.740314 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 14:27:04.740417 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 14:27:04.740487 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 14:27:04.740503 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 14:27:04.740515 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 14:27:04.741009 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 14:27:04.741679 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 14:27:04.742297 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 14:27:04.744241 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 14:27:04.744268 | orchestrator | 2025-05-14 14:27:04.744473 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-14 14:27:04.744642 | orchestrator | Wednesday 14 May 2025 14:27:04 +0000 (0:00:02.188) 0:00:06.807 ********* 2025-05-14 14:27:05.382509 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:27:05.382612 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:27:05.382838 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:27:05.383696 | orchestrator | 2025-05-14 14:27:05.383900 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-14 14:27:05.385596 | orchestrator | Wednesday 14 May 2025 14:27:05 +0000 (0:00:00.648) 0:00:07.456 ********* 2025-05-14 14:27:06.049065 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:27:06.049214 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:27:06.049309 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:27:06.049587 | orchestrator | 2025-05-14 14:27:06.050501 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:27:06.050701 | orchestrator | 2025-05-14 14:27:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:27:06.050724 | orchestrator | 2025-05-14 14:27:06 | INFO  | Please wait and do not abort execution. 
2025-05-14 14:27:06.051059 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:06.051197 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:06.051494 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:06.051670 | orchestrator | 2025-05-14 14:27:06.052025 | orchestrator | Wednesday 14 May 2025 14:27:06 +0000 (0:00:00.665) 0:00:08.121 ********* 2025-05-14 14:27:06.052320 | orchestrator | =============================================================================== 2025-05-14 14:27:06.052615 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s 2025-05-14 14:27:06.054349 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s 2025-05-14 14:27:06.054376 | orchestrator | Check device availability ----------------------------------------------- 1.28s 2025-05-14 14:27:06.054391 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.77s 2025-05-14 14:27:06.054402 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2025-05-14 14:27:06.054413 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s 2025-05-14 14:27:06.054424 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2025-05-14 14:27:06.054545 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2025-05-14 14:27:06.054794 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-05-14 14:27:08.110231 | orchestrator | 2025-05-14 14:27:08 | INFO  | Task 42011a91-96e8-4665-b0b6-a95bdbe1b790 (facts) was prepared for execution. 2025-05-14 14:27:08.111940 | orchestrator | 2025-05-14 14:27:08 | INFO  | It takes a moment until task 42011a91-96e8-4665-b0b6-a95bdbe1b790 (facts) has been started and output is visible here. 
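
The wipe-partitions play above prepares the spare disks on the storage nodes for Ceph by clearing any previous signatures. A per-node shell sketch of the same sequence of tasks, with a placeholder device list and run as root, is:

for dev in /dev/sdb /dev/sdc /dev/sdd; do
  test -b "$dev" || continue                      # Check device availability
  wipefs -a "$dev"                                # Wipe partitions with wipefs
  dd if=/dev/zero of="$dev" bs=1M count=32        # Overwrite first 32M with zeros
done
udevadm control --reload-rules                    # Reload udev rules
udevadm trigger                                   # Request device events from the kernel
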
2025-05-14 14:27:11.205536 | orchestrator | 2025-05-14 14:27:11.205974 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 14:27:11.208027 | orchestrator | 2025-05-14 14:27:11.209765 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 14:27:11.210102 | orchestrator | Wednesday 14 May 2025 14:27:11 +0000 (0:00:00.189) 0:00:00.189 ********* 2025-05-14 14:27:12.219158 | orchestrator | ok: [testbed-manager] 2025-05-14 14:27:12.223263 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:27:12.223304 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:27:12.223317 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:27:12.223329 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:27:12.224516 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:27:12.225561 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:27:12.226800 | orchestrator | 2025-05-14 14:27:12.228039 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 14:27:12.229738 | orchestrator | Wednesday 14 May 2025 14:27:12 +0000 (0:00:01.013) 0:00:01.202 ********* 2025-05-14 14:27:12.403404 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:27:12.516991 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:27:12.634665 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:27:12.755684 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:27:12.871867 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:13.880947 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:13.881638 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:13.882620 | orchestrator | 2025-05-14 14:27:13.884158 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 14:27:13.889517 | orchestrator | 2025-05-14 14:27:13.890348 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 14:27:13.891628 | orchestrator | Wednesday 14 May 2025 14:27:13 +0000 (0:00:01.663) 0:00:02.866 ********* 2025-05-14 14:27:18.697821 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:27:18.698174 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:27:18.698946 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:27:18.700026 | orchestrator | ok: [testbed-manager] 2025-05-14 14:27:18.700167 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:27:18.701813 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:27:18.701835 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:27:18.702411 | orchestrator | 2025-05-14 14:27:18.703334 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 14:27:18.704023 | orchestrator | 2025-05-14 14:27:18.704779 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 14:27:18.705528 | orchestrator | Wednesday 14 May 2025 14:27:18 +0000 (0:00:04.806) 0:00:07.673 ********* 2025-05-14 14:27:19.064159 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:27:19.138074 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:27:19.223618 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:27:19.301689 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:27:19.388139 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:19.433823 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:19.433921 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 14:27:19.434636 | orchestrator | 2025-05-14 14:27:19.435741 | orchestrator | 2025-05-14 14:27:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:27:19.435767 | orchestrator | 2025-05-14 14:27:19 | INFO  | Please wait and do not abort execution. 2025-05-14 14:27:19.435778 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:27:19.436183 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.437496 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.438918 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.440350 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.440387 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.441122 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.441928 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:27:19.442650 | orchestrator | 2025-05-14 14:27:19.443432 | orchestrator | Wednesday 14 May 2025 14:27:19 +0000 (0:00:00.740) 0:00:08.413 ********* 2025-05-14 14:27:19.443907 | orchestrator | =============================================================================== 2025-05-14 14:27:19.444625 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s 2025-05-14 14:27:19.445246 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.66s 2025-05-14 14:27:19.445625 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2025-05-14 14:27:19.446383 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.74s 2025-05-14 14:27:21.779591 | orchestrator | 2025-05-14 14:27:21 | INFO  | Task 4fa735b7-95f2-49f8-b98d-9339250ca59a (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-14 14:27:21.779698 | orchestrator | 2025-05-14 14:27:21 | INFO  | It takes a moment until task 4fa735b7-95f2-49f8-b98d-9339250ca59a (ceph-configure-lvm-volumes) has been started and output is visible here. 
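
The facts play above creates a custom facts directory and (optionally) copies fact files into it. As a sketch of the underlying Ansible mechanism, assuming the default local-facts location /etc/ansible/facts.d and a hypothetical example.fact file:

sudo mkdir -p /etc/ansible/facts.d               # Create custom facts directory
cat <<'EOF' | sudo tee /etc/ansible/facts.d/example.fact
{"managed_by": "osism-testbed"}
EOF
# After the next fact-gathering run this content is available as ansible_local.example.
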
2025-05-14 14:27:25.196916 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 14:27:25.726939 | orchestrator | 2025-05-14 14:27:25.730995 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 14:27:25.732496 | orchestrator | 2025-05-14 14:27:25.734660 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 14:27:25.735909 | orchestrator | Wednesday 14 May 2025 14:27:25 +0000 (0:00:00.461) 0:00:00.461 ********* 2025-05-14 14:27:25.977867 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 14:27:25.980065 | orchestrator | 2025-05-14 14:27:25.981346 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 14:27:25.981518 | orchestrator | Wednesday 14 May 2025 14:27:25 +0000 (0:00:00.244) 0:00:00.705 ********* 2025-05-14 14:27:26.205224 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:27:26.205344 | orchestrator | 2025-05-14 14:27:26.205362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:26.205683 | orchestrator | Wednesday 14 May 2025 14:27:26 +0000 (0:00:00.234) 0:00:00.940 ********* 2025-05-14 14:27:26.709534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-14 14:27:26.713118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-14 14:27:26.713384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-14 14:27:26.714411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-14 14:27:26.714464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-14 14:27:26.714520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-14 14:27:26.714927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-14 14:27:26.715642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-14 14:27:26.716101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-14 14:27:26.717168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-14 14:27:26.717361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-14 14:27:26.717780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-14 14:27:26.719464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-14 14:27:26.719810 | orchestrator | 2025-05-14 14:27:26.720246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:26.720693 | orchestrator | Wednesday 14 May 2025 14:27:26 +0000 (0:00:00.505) 0:00:01.446 ********* 2025-05-14 14:27:26.915792 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:26.918181 | orchestrator | 2025-05-14 14:27:26.918398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:26.918975 | orchestrator | Wednesday 14 May 2025 14:27:26 +0000 
(0:00:00.198) 0:00:01.644 ********* 2025-05-14 14:27:27.115749 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:27.116546 | orchestrator | 2025-05-14 14:27:27.117640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:27.122232 | orchestrator | Wednesday 14 May 2025 14:27:27 +0000 (0:00:00.209) 0:00:01.853 ********* 2025-05-14 14:27:27.342185 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:27.344381 | orchestrator | 2025-05-14 14:27:27.345265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:27.348354 | orchestrator | Wednesday 14 May 2025 14:27:27 +0000 (0:00:00.227) 0:00:02.080 ********* 2025-05-14 14:27:27.537113 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:27.538367 | orchestrator | 2025-05-14 14:27:27.539618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:27.542654 | orchestrator | Wednesday 14 May 2025 14:27:27 +0000 (0:00:00.195) 0:00:02.276 ********* 2025-05-14 14:27:27.735324 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:27.737659 | orchestrator | 2025-05-14 14:27:27.737922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:27.738578 | orchestrator | Wednesday 14 May 2025 14:27:27 +0000 (0:00:00.198) 0:00:02.475 ********* 2025-05-14 14:27:27.927117 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:27.927206 | orchestrator | 2025-05-14 14:27:27.927605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:27.928712 | orchestrator | Wednesday 14 May 2025 14:27:27 +0000 (0:00:00.189) 0:00:02.664 ********* 2025-05-14 14:27:28.120420 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:28.120561 | orchestrator | 2025-05-14 14:27:28.121971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:28.122269 | orchestrator | Wednesday 14 May 2025 14:27:28 +0000 (0:00:00.193) 0:00:02.857 ********* 2025-05-14 14:27:28.343612 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:28.344410 | orchestrator | 2025-05-14 14:27:28.345979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:28.347513 | orchestrator | Wednesday 14 May 2025 14:27:28 +0000 (0:00:00.223) 0:00:03.081 ********* 2025-05-14 14:27:28.948927 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580) 2025-05-14 14:27:28.949296 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580) 2025-05-14 14:27:28.949775 | orchestrator | 2025-05-14 14:27:28.951227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:28.952712 | orchestrator | Wednesday 14 May 2025 14:27:28 +0000 (0:00:00.604) 0:00:03.685 ********* 2025-05-14 14:27:29.830788 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4) 2025-05-14 14:27:29.833341 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4) 2025-05-14 14:27:29.834410 | orchestrator | 2025-05-14 14:27:29.834527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 
14:27:29.834543 | orchestrator | Wednesday 14 May 2025 14:27:29 +0000 (0:00:00.883) 0:00:04.568 ********* 2025-05-14 14:27:30.403998 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e) 2025-05-14 14:27:30.404108 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e) 2025-05-14 14:27:30.404181 | orchestrator | 2025-05-14 14:27:30.404761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:30.404831 | orchestrator | Wednesday 14 May 2025 14:27:30 +0000 (0:00:00.572) 0:00:05.140 ********* 2025-05-14 14:27:30.937160 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4) 2025-05-14 14:27:30.939342 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4) 2025-05-14 14:27:30.939357 | orchestrator | 2025-05-14 14:27:30.939798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:30.939806 | orchestrator | Wednesday 14 May 2025 14:27:30 +0000 (0:00:00.533) 0:00:05.674 ********* 2025-05-14 14:27:31.346899 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 14:27:31.347577 | orchestrator | 2025-05-14 14:27:31.347613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:31.347627 | orchestrator | Wednesday 14 May 2025 14:27:31 +0000 (0:00:00.407) 0:00:06.082 ********* 2025-05-14 14:27:31.835266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-14 14:27:31.835360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-14 14:27:31.835396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-14 14:27:31.835406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-14 14:27:31.835415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-14 14:27:31.835425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-14 14:27:31.835434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-14 14:27:31.837277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-14 14:27:31.837325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-14 14:27:31.837334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-14 14:27:31.837624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-14 14:27:31.837726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-14 14:27:31.838160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-14 14:27:31.838383 | orchestrator | 2025-05-14 14:27:31.838792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:31.839096 | orchestrator | Wednesday 14 May 2025 14:27:31 +0000 
(0:00:00.491) 0:00:06.573 ********* 2025-05-14 14:27:32.144317 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:32.144488 | orchestrator | 2025-05-14 14:27:32.146937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:32.147008 | orchestrator | Wednesday 14 May 2025 14:27:32 +0000 (0:00:00.308) 0:00:06.882 ********* 2025-05-14 14:27:32.370731 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:32.370831 | orchestrator | 2025-05-14 14:27:32.370847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:32.370859 | orchestrator | Wednesday 14 May 2025 14:27:32 +0000 (0:00:00.223) 0:00:07.106 ********* 2025-05-14 14:27:32.609830 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:32.609916 | orchestrator | 2025-05-14 14:27:32.609930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:32.609942 | orchestrator | Wednesday 14 May 2025 14:27:32 +0000 (0:00:00.242) 0:00:07.348 ********* 2025-05-14 14:27:32.819704 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:32.819791 | orchestrator | 2025-05-14 14:27:32.819806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:32.819819 | orchestrator | Wednesday 14 May 2025 14:27:32 +0000 (0:00:00.209) 0:00:07.558 ********* 2025-05-14 14:27:33.276273 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:33.276411 | orchestrator | 2025-05-14 14:27:33.277039 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:33.277063 | orchestrator | Wednesday 14 May 2025 14:27:33 +0000 (0:00:00.457) 0:00:08.016 ********* 2025-05-14 14:27:33.498189 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:33.498667 | orchestrator | 2025-05-14 14:27:33.498691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:33.498704 | orchestrator | Wednesday 14 May 2025 14:27:33 +0000 (0:00:00.222) 0:00:08.238 ********* 2025-05-14 14:27:33.691787 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:33.692526 | orchestrator | 2025-05-14 14:27:33.692553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:33.692566 | orchestrator | Wednesday 14 May 2025 14:27:33 +0000 (0:00:00.194) 0:00:08.432 ********* 2025-05-14 14:27:33.876085 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:33.876170 | orchestrator | 2025-05-14 14:27:33.876785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:33.876996 | orchestrator | Wednesday 14 May 2025 14:27:33 +0000 (0:00:00.181) 0:00:08.614 ********* 2025-05-14 14:27:34.482340 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-14 14:27:34.483232 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-14 14:27:34.484681 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-14 14:27:34.484703 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-14 14:27:34.485614 | orchestrator | 2025-05-14 14:27:34.487056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:34.487080 | orchestrator | Wednesday 14 May 2025 14:27:34 +0000 (0:00:00.608) 0:00:09.223 ********* 2025-05-14 14:27:34.684383 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:27:34.685380 | orchestrator | 2025-05-14 14:27:34.687278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:34.687321 | orchestrator | Wednesday 14 May 2025 14:27:34 +0000 (0:00:00.200) 0:00:09.423 ********* 2025-05-14 14:27:34.921841 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:34.922996 | orchestrator | 2025-05-14 14:27:34.924474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:34.924535 | orchestrator | Wednesday 14 May 2025 14:27:34 +0000 (0:00:00.233) 0:00:09.656 ********* 2025-05-14 14:27:35.153384 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:35.155774 | orchestrator | 2025-05-14 14:27:35.157017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:35.157246 | orchestrator | Wednesday 14 May 2025 14:27:35 +0000 (0:00:00.235) 0:00:09.892 ********* 2025-05-14 14:27:35.342523 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:35.344260 | orchestrator | 2025-05-14 14:27:35.344364 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 14:27:35.344547 | orchestrator | Wednesday 14 May 2025 14:27:35 +0000 (0:00:00.187) 0:00:10.079 ********* 2025-05-14 14:27:35.481248 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-14 14:27:35.483320 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-14 14:27:35.486672 | orchestrator | 2025-05-14 14:27:35.487899 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 14:27:35.489035 | orchestrator | Wednesday 14 May 2025 14:27:35 +0000 (0:00:00.138) 0:00:10.217 ********* 2025-05-14 14:27:35.569237 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:35.570242 | orchestrator | 2025-05-14 14:27:35.571662 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 14:27:35.571692 | orchestrator | Wednesday 14 May 2025 14:27:35 +0000 (0:00:00.091) 0:00:10.309 ********* 2025-05-14 14:27:35.882323 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:35.882417 | orchestrator | 2025-05-14 14:27:35.882831 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 14:27:35.883375 | orchestrator | Wednesday 14 May 2025 14:27:35 +0000 (0:00:00.311) 0:00:10.621 ********* 2025-05-14 14:27:36.016620 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:36.018605 | orchestrator | 2025-05-14 14:27:36.019233 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 14:27:36.019327 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.133) 0:00:10.755 ********* 2025-05-14 14:27:36.154077 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:27:36.154245 | orchestrator | 2025-05-14 14:27:36.154627 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 14:27:36.155076 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.129) 0:00:10.884 ********* 2025-05-14 14:27:36.300385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}}) 2025-05-14 14:27:36.300529 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '46afb65a-1642-5955-80d8-115babed40cc'}}) 2025-05-14 14:27:36.303353 | orchestrator | 2025-05-14 14:27:36.303598 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 14:27:36.303781 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.154) 0:00:11.039 ********* 2025-05-14 14:27:36.441099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}})  2025-05-14 14:27:36.441249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46afb65a-1642-5955-80d8-115babed40cc'}})  2025-05-14 14:27:36.441997 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:36.442307 | orchestrator | 2025-05-14 14:27:36.442948 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 14:27:36.444949 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.141) 0:00:11.180 ********* 2025-05-14 14:27:36.585105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}})  2025-05-14 14:27:36.585577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46afb65a-1642-5955-80d8-115babed40cc'}})  2025-05-14 14:27:36.586590 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:36.588589 | orchestrator | 2025-05-14 14:27:36.590080 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 14:27:36.590104 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.142) 0:00:11.323 ********* 2025-05-14 14:27:36.753569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}})  2025-05-14 14:27:36.755224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46afb65a-1642-5955-80d8-115babed40cc'}})  2025-05-14 14:27:36.755912 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:36.756227 | orchestrator | 2025-05-14 14:27:36.756741 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 14:27:36.757180 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.161) 0:00:11.484 ********* 2025-05-14 14:27:36.895742 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:27:36.896123 | orchestrator | 2025-05-14 14:27:36.898218 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 14:27:36.898243 | orchestrator | Wednesday 14 May 2025 14:27:36 +0000 (0:00:00.147) 0:00:11.632 ********* 2025-05-14 14:27:37.012726 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:27:37.015847 | orchestrator | 2025-05-14 14:27:37.016796 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 14:27:37.017704 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.117) 0:00:11.750 ********* 2025-05-14 14:27:37.129404 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:37.130359 | orchestrator | 2025-05-14 14:27:37.130559 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 14:27:37.131932 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.116) 0:00:11.866 ********* 2025-05-14 14:27:37.275926 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
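Annotation: the block-only path above ("Generate lvm_volumes structure (block only)" followed by "Compile lvm_volumes") pairs each OSD device's osd_lvm_uuid with a data LV and a data VG. Judging by the configuration data printed just below for testbed-node-3, the naming follows data = osd-block-<uuid> and data_vg = ceph-<uuid>. The following is only a minimal Python sketch of that mapping, assuming the printed values; the function name and structure are illustrative and not taken from the playbook:

def compile_lvm_volumes(ceph_osd_devices):
    # Build block-only lvm_volumes entries from the per-device OSD LVM UUIDs,
    # mirroring the names visible in the "Print configuration data" output.
    volumes = []
    for device, params in ceph_osd_devices.items():
        osd_uuid = params["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",
            "data_vg": f"ceph-{osd_uuid}",
        })
    return volumes

# Values as printed for testbed-node-3:
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd"},
    "sdc": {"osd_lvm_uuid": "46afb65a-1642-5955-80d8-115babed40cc"},
}
print(compile_lvm_volumes(ceph_osd_devices))
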
14:27:37.277781 | orchestrator | 2025-05-14 14:27:37.281546 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 14:27:37.282048 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.149) 0:00:12.015 ********* 2025-05-14 14:27:37.421816 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:37.426914 | orchestrator | 2025-05-14 14:27:37.426948 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 14:27:37.426961 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.144) 0:00:12.159 ********* 2025-05-14 14:27:37.661601 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:27:37.661662 | orchestrator |  "ceph_osd_devices": { 2025-05-14 14:27:37.662357 | orchestrator |  "sdb": { 2025-05-14 14:27:37.664651 | orchestrator |  "osd_lvm_uuid": "5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd" 2025-05-14 14:27:37.666155 | orchestrator |  }, 2025-05-14 14:27:37.666607 | orchestrator |  "sdc": { 2025-05-14 14:27:37.666628 | orchestrator |  "osd_lvm_uuid": "46afb65a-1642-5955-80d8-115babed40cc" 2025-05-14 14:27:37.667720 | orchestrator |  } 2025-05-14 14:27:37.668095 | orchestrator |  } 2025-05-14 14:27:37.671151 | orchestrator | } 2025-05-14 14:27:37.671207 | orchestrator | 2025-05-14 14:27:37.671620 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 14:27:37.671655 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.242) 0:00:12.401 ********* 2025-05-14 14:27:37.787630 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:37.787724 | orchestrator | 2025-05-14 14:27:37.788653 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 14:27:37.791840 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.125) 0:00:12.527 ********* 2025-05-14 14:27:37.916432 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:37.916832 | orchestrator | 2025-05-14 14:27:37.917971 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 14:27:37.918814 | orchestrator | Wednesday 14 May 2025 14:27:37 +0000 (0:00:00.129) 0:00:12.657 ********* 2025-05-14 14:27:38.037019 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:27:38.038369 | orchestrator | 2025-05-14 14:27:38.039130 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 14:27:38.039154 | orchestrator | Wednesday 14 May 2025 14:27:38 +0000 (0:00:00.118) 0:00:12.776 ********* 2025-05-14 14:27:38.318948 | orchestrator | changed: [testbed-node-3] => { 2025-05-14 14:27:38.322305 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 14:27:38.324382 | orchestrator |  "ceph_osd_devices": { 2025-05-14 14:27:38.325405 | orchestrator |  "sdb": { 2025-05-14 14:27:38.326594 | orchestrator |  "osd_lvm_uuid": "5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd" 2025-05-14 14:27:38.327411 | orchestrator |  }, 2025-05-14 14:27:38.328650 | orchestrator |  "sdc": { 2025-05-14 14:27:38.329753 | orchestrator |  "osd_lvm_uuid": "46afb65a-1642-5955-80d8-115babed40cc" 2025-05-14 14:27:38.330350 | orchestrator |  } 2025-05-14 14:27:38.331107 | orchestrator |  }, 2025-05-14 14:27:38.331586 | orchestrator |  "lvm_volumes": [ 2025-05-14 14:27:38.332176 | orchestrator |  { 2025-05-14 14:27:38.332592 | orchestrator |  "data": "osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd", 2025-05-14 14:27:38.333062 | orchestrator |  
"data_vg": "ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd" 2025-05-14 14:27:38.334077 | orchestrator |  }, 2025-05-14 14:27:38.334104 | orchestrator |  { 2025-05-14 14:27:38.334602 | orchestrator |  "data": "osd-block-46afb65a-1642-5955-80d8-115babed40cc", 2025-05-14 14:27:38.335079 | orchestrator |  "data_vg": "ceph-46afb65a-1642-5955-80d8-115babed40cc" 2025-05-14 14:27:38.335576 | orchestrator |  } 2025-05-14 14:27:38.336010 | orchestrator |  ] 2025-05-14 14:27:38.336535 | orchestrator |  } 2025-05-14 14:27:38.337010 | orchestrator | } 2025-05-14 14:27:38.338494 | orchestrator | 2025-05-14 14:27:38.338539 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 14:27:38.338601 | orchestrator | Wednesday 14 May 2025 14:27:38 +0000 (0:00:00.281) 0:00:13.057 ********* 2025-05-14 14:27:40.474655 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 14:27:40.478263 | orchestrator | 2025-05-14 14:27:40.480126 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 14:27:40.480337 | orchestrator | 2025-05-14 14:27:40.482113 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 14:27:40.484661 | orchestrator | Wednesday 14 May 2025 14:27:40 +0000 (0:00:02.145) 0:00:15.203 ********* 2025-05-14 14:27:40.780044 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 14:27:40.780312 | orchestrator | 2025-05-14 14:27:40.780897 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 14:27:40.781916 | orchestrator | Wednesday 14 May 2025 14:27:40 +0000 (0:00:00.316) 0:00:15.520 ********* 2025-05-14 14:27:41.034315 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:27:41.039769 | orchestrator | 2025-05-14 14:27:41.040987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:41.042009 | orchestrator | Wednesday 14 May 2025 14:27:41 +0000 (0:00:00.252) 0:00:15.772 ********* 2025-05-14 14:27:41.499661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-14 14:27:41.503610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-14 14:27:41.505565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-14 14:27:41.506408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-14 14:27:41.510810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-14 14:27:41.512651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-14 14:27:41.514200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-14 14:27:41.516781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-14 14:27:41.516831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-14 14:27:41.518782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-14 14:27:41.519795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-14 14:27:41.520650 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-14 14:27:41.523935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-14 14:27:41.524243 | orchestrator | 2025-05-14 14:27:41.524626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:41.525093 | orchestrator | Wednesday 14 May 2025 14:27:41 +0000 (0:00:00.462) 0:00:16.234 ********* 2025-05-14 14:27:41.764213 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:41.766361 | orchestrator | 2025-05-14 14:27:41.767648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:41.768729 | orchestrator | Wednesday 14 May 2025 14:27:41 +0000 (0:00:00.269) 0:00:16.503 ********* 2025-05-14 14:27:41.995250 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:41.996191 | orchestrator | 2025-05-14 14:27:41.999025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:41.999073 | orchestrator | Wednesday 14 May 2025 14:27:41 +0000 (0:00:00.229) 0:00:16.733 ********* 2025-05-14 14:27:42.238664 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:42.238773 | orchestrator | 2025-05-14 14:27:42.238790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:42.238802 | orchestrator | Wednesday 14 May 2025 14:27:42 +0000 (0:00:00.239) 0:00:16.973 ********* 2025-05-14 14:27:42.485058 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:42.488876 | orchestrator | 2025-05-14 14:27:42.490108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:42.491101 | orchestrator | Wednesday 14 May 2025 14:27:42 +0000 (0:00:00.250) 0:00:17.223 ********* 2025-05-14 14:27:43.063208 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:43.064425 | orchestrator | 2025-05-14 14:27:43.065090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:43.067483 | orchestrator | Wednesday 14 May 2025 14:27:43 +0000 (0:00:00.576) 0:00:17.799 ********* 2025-05-14 14:27:43.281260 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:43.283632 | orchestrator | 2025-05-14 14:27:43.283688 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:43.284750 | orchestrator | Wednesday 14 May 2025 14:27:43 +0000 (0:00:00.221) 0:00:18.021 ********* 2025-05-14 14:27:43.510193 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:43.510442 | orchestrator | 2025-05-14 14:27:43.511502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:43.514436 | orchestrator | Wednesday 14 May 2025 14:27:43 +0000 (0:00:00.227) 0:00:18.248 ********* 2025-05-14 14:27:43.730838 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:43.731319 | orchestrator | 2025-05-14 14:27:43.731814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:43.737394 | orchestrator | Wednesday 14 May 2025 14:27:43 +0000 (0:00:00.221) 0:00:18.469 ********* 2025-05-14 14:27:44.227739 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8) 2025-05-14 14:27:44.229441 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8) 2025-05-14 14:27:44.230063 | orchestrator | 2025-05-14 14:27:44.231420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:44.232838 | orchestrator | Wednesday 14 May 2025 14:27:44 +0000 (0:00:00.497) 0:00:18.966 ********* 2025-05-14 14:27:44.737975 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1) 2025-05-14 14:27:44.738148 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1) 2025-05-14 14:27:44.739681 | orchestrator | 2025-05-14 14:27:44.740588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:44.743692 | orchestrator | Wednesday 14 May 2025 14:27:44 +0000 (0:00:00.506) 0:00:19.473 ********* 2025-05-14 14:27:45.230691 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728) 2025-05-14 14:27:45.230755 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728) 2025-05-14 14:27:45.230763 | orchestrator | 2025-05-14 14:27:45.230958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:45.230972 | orchestrator | Wednesday 14 May 2025 14:27:45 +0000 (0:00:00.495) 0:00:19.968 ********* 2025-05-14 14:27:45.611303 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194) 2025-05-14 14:27:45.612668 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194) 2025-05-14 14:27:45.613361 | orchestrator | 2025-05-14 14:27:45.614799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:45.617467 | orchestrator | Wednesday 14 May 2025 14:27:45 +0000 (0:00:00.382) 0:00:20.351 ********* 2025-05-14 14:27:45.878832 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 14:27:45.879252 | orchestrator | 2025-05-14 14:27:45.879738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:45.880435 | orchestrator | Wednesday 14 May 2025 14:27:45 +0000 (0:00:00.267) 0:00:20.619 ********* 2025-05-14 14:27:46.530491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-14 14:27:46.530612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-14 14:27:46.530627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-14 14:27:46.530638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-14 14:27:46.530649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-14 14:27:46.530660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-14 14:27:46.530750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-14 14:27:46.531080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-14 14:27:46.531103 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-14 14:27:46.531393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-14 14:27:46.531849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-14 14:27:46.531973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-14 14:27:46.532484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-14 14:27:46.532763 | orchestrator | 2025-05-14 14:27:46.533092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:46.533355 | orchestrator | Wednesday 14 May 2025 14:27:46 +0000 (0:00:00.648) 0:00:21.268 ********* 2025-05-14 14:27:46.734120 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:46.734269 | orchestrator | 2025-05-14 14:27:46.734526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:46.734590 | orchestrator | Wednesday 14 May 2025 14:27:46 +0000 (0:00:00.203) 0:00:21.472 ********* 2025-05-14 14:27:46.923926 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:46.924008 | orchestrator | 2025-05-14 14:27:46.924314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:46.925862 | orchestrator | Wednesday 14 May 2025 14:27:46 +0000 (0:00:00.188) 0:00:21.660 ********* 2025-05-14 14:27:47.133110 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:47.133585 | orchestrator | 2025-05-14 14:27:47.133992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:47.134323 | orchestrator | Wednesday 14 May 2025 14:27:47 +0000 (0:00:00.212) 0:00:21.873 ********* 2025-05-14 14:27:47.327558 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:47.327646 | orchestrator | 2025-05-14 14:27:47.327769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:47.327841 | orchestrator | Wednesday 14 May 2025 14:27:47 +0000 (0:00:00.194) 0:00:22.067 ********* 2025-05-14 14:27:47.516143 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:47.516334 | orchestrator | 2025-05-14 14:27:47.516588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:47.517037 | orchestrator | Wednesday 14 May 2025 14:27:47 +0000 (0:00:00.189) 0:00:22.256 ********* 2025-05-14 14:27:47.685518 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:47.687323 | orchestrator | 2025-05-14 14:27:47.688380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:47.688630 | orchestrator | Wednesday 14 May 2025 14:27:47 +0000 (0:00:00.166) 0:00:22.423 ********* 2025-05-14 14:27:47.879790 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:47.882437 | orchestrator | 2025-05-14 14:27:47.882990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:47.884742 | orchestrator | Wednesday 14 May 2025 14:27:47 +0000 (0:00:00.192) 0:00:22.616 ********* 2025-05-14 14:27:48.057424 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:48.057906 | orchestrator | 2025-05-14 14:27:48.059039 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-14 14:27:48.059928 | orchestrator | Wednesday 14 May 2025 14:27:48 +0000 (0:00:00.179) 0:00:22.795 ********* 2025-05-14 14:27:48.800445 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-14 14:27:48.801017 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-14 14:27:48.802013 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-14 14:27:48.802739 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-14 14:27:48.803716 | orchestrator | 2025-05-14 14:27:48.804225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:48.805115 | orchestrator | Wednesday 14 May 2025 14:27:48 +0000 (0:00:00.740) 0:00:23.536 ********* 2025-05-14 14:27:49.382612 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:49.383055 | orchestrator | 2025-05-14 14:27:49.386556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:49.386595 | orchestrator | Wednesday 14 May 2025 14:27:49 +0000 (0:00:00.585) 0:00:24.122 ********* 2025-05-14 14:27:49.584205 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:49.584650 | orchestrator | 2025-05-14 14:27:49.586112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:49.587399 | orchestrator | Wednesday 14 May 2025 14:27:49 +0000 (0:00:00.199) 0:00:24.321 ********* 2025-05-14 14:27:49.755922 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:49.757057 | orchestrator | 2025-05-14 14:27:49.758313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:27:49.758924 | orchestrator | Wednesday 14 May 2025 14:27:49 +0000 (0:00:00.173) 0:00:24.494 ********* 2025-05-14 14:27:49.949019 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:49.949963 | orchestrator | 2025-05-14 14:27:49.950735 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 14:27:49.951622 | orchestrator | Wednesday 14 May 2025 14:27:49 +0000 (0:00:00.194) 0:00:24.689 ********* 2025-05-14 14:27:50.107904 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-14 14:27:50.108712 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-14 14:27:50.109321 | orchestrator | 2025-05-14 14:27:50.110221 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 14:27:50.110702 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.157) 0:00:24.847 ********* 2025-05-14 14:27:50.236993 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:50.237077 | orchestrator | 2025-05-14 14:27:50.237092 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 14:27:50.237756 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.128) 0:00:24.975 ********* 2025-05-14 14:27:50.361392 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:50.361684 | orchestrator | 2025-05-14 14:27:50.362583 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 14:27:50.364936 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.126) 0:00:25.101 ********* 2025-05-14 14:27:50.485760 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:50.486686 | orchestrator | 2025-05-14 
14:27:50.487804 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 14:27:50.488863 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.124) 0:00:25.225 ********* 2025-05-14 14:27:50.625919 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:27:50.626633 | orchestrator | 2025-05-14 14:27:50.627944 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 14:27:50.628251 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.139) 0:00:25.365 ********* 2025-05-14 14:27:50.817963 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}}) 2025-05-14 14:27:50.818538 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6248da54-4321-5f95-9f37-ef0f81563cc8'}}) 2025-05-14 14:27:50.821351 | orchestrator | 2025-05-14 14:27:50.821716 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 14:27:50.822512 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.190) 0:00:25.555 ********* 2025-05-14 14:27:50.991386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}})  2025-05-14 14:27:50.991575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6248da54-4321-5f95-9f37-ef0f81563cc8'}})  2025-05-14 14:27:50.991742 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:50.991820 | orchestrator | 2025-05-14 14:27:50.992647 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 14:27:50.995762 | orchestrator | Wednesday 14 May 2025 14:27:50 +0000 (0:00:00.173) 0:00:25.729 ********* 2025-05-14 14:27:51.341855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}})  2025-05-14 14:27:51.342607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6248da54-4321-5f95-9f37-ef0f81563cc8'}})  2025-05-14 14:27:51.343251 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:51.347134 | orchestrator | 2025-05-14 14:27:51.348093 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 14:27:51.348445 | orchestrator | Wednesday 14 May 2025 14:27:51 +0000 (0:00:00.351) 0:00:26.080 ********* 2025-05-14 14:27:51.506184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}})  2025-05-14 14:27:51.507260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6248da54-4321-5f95-9f37-ef0f81563cc8'}})  2025-05-14 14:27:51.508659 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:51.510131 | orchestrator | 2025-05-14 14:27:51.511154 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 14:27:51.512204 | orchestrator | Wednesday 14 May 2025 14:27:51 +0000 (0:00:00.164) 0:00:26.245 ********* 2025-05-14 14:27:51.646814 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:27:51.647332 | orchestrator | 2025-05-14 14:27:51.649167 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 14:27:51.649883 | orchestrator | Wednesday 14 May 2025 14:27:51 +0000 
(0:00:00.140) 0:00:26.385 ********* 2025-05-14 14:27:51.796903 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:27:51.797673 | orchestrator | 2025-05-14 14:27:51.798937 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 14:27:51.799952 | orchestrator | Wednesday 14 May 2025 14:27:51 +0000 (0:00:00.149) 0:00:26.534 ********* 2025-05-14 14:27:51.934517 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:51.938831 | orchestrator | 2025-05-14 14:27:51.941041 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 14:27:51.941766 | orchestrator | Wednesday 14 May 2025 14:27:51 +0000 (0:00:00.136) 0:00:26.671 ********* 2025-05-14 14:27:52.052746 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:52.054189 | orchestrator | 2025-05-14 14:27:52.056579 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 14:27:52.056616 | orchestrator | Wednesday 14 May 2025 14:27:52 +0000 (0:00:00.119) 0:00:26.791 ********* 2025-05-14 14:27:52.206349 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:52.206617 | orchestrator | 2025-05-14 14:27:52.207738 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 14:27:52.208972 | orchestrator | Wednesday 14 May 2025 14:27:52 +0000 (0:00:00.147) 0:00:26.939 ********* 2025-05-14 14:27:52.351843 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:27:52.353245 | orchestrator |  "ceph_osd_devices": { 2025-05-14 14:27:52.354535 | orchestrator |  "sdb": { 2025-05-14 14:27:52.354873 | orchestrator |  "osd_lvm_uuid": "904dffa8-69ed-5eff-9e62-bfdd56e5c3c6" 2025-05-14 14:27:52.356216 | orchestrator |  }, 2025-05-14 14:27:52.357410 | orchestrator |  "sdc": { 2025-05-14 14:27:52.359301 | orchestrator |  "osd_lvm_uuid": "6248da54-4321-5f95-9f37-ef0f81563cc8" 2025-05-14 14:27:52.359981 | orchestrator |  } 2025-05-14 14:27:52.361091 | orchestrator |  } 2025-05-14 14:27:52.361428 | orchestrator | } 2025-05-14 14:27:52.362978 | orchestrator | 2025-05-14 14:27:52.363141 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 14:27:52.363645 | orchestrator | Wednesday 14 May 2025 14:27:52 +0000 (0:00:00.150) 0:00:27.089 ********* 2025-05-14 14:27:52.489745 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:52.489860 | orchestrator | 2025-05-14 14:27:52.490890 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 14:27:52.490930 | orchestrator | Wednesday 14 May 2025 14:27:52 +0000 (0:00:00.137) 0:00:27.227 ********* 2025-05-14 14:27:52.627612 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:52.628386 | orchestrator | 2025-05-14 14:27:52.629145 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 14:27:52.630338 | orchestrator | Wednesday 14 May 2025 14:27:52 +0000 (0:00:00.139) 0:00:27.366 ********* 2025-05-14 14:27:52.768561 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:27:52.769638 | orchestrator | 2025-05-14 14:27:52.772447 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 14:27:52.776437 | orchestrator | Wednesday 14 May 2025 14:27:52 +0000 (0:00:00.140) 0:00:27.506 ********* 2025-05-14 14:27:53.230921 | orchestrator | changed: [testbed-node-4] => { 2025-05-14 14:27:53.231021 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 14:27:53.235636 | orchestrator |  "ceph_osd_devices": { 2025-05-14 14:27:53.235669 | orchestrator |  "sdb": { 2025-05-14 14:27:53.235682 | orchestrator |  "osd_lvm_uuid": "904dffa8-69ed-5eff-9e62-bfdd56e5c3c6" 2025-05-14 14:27:53.235694 | orchestrator |  }, 2025-05-14 14:27:53.235706 | orchestrator |  "sdc": { 2025-05-14 14:27:53.236360 | orchestrator |  "osd_lvm_uuid": "6248da54-4321-5f95-9f37-ef0f81563cc8" 2025-05-14 14:27:53.236851 | orchestrator |  } 2025-05-14 14:27:53.237911 | orchestrator |  }, 2025-05-14 14:27:53.238178 | orchestrator |  "lvm_volumes": [ 2025-05-14 14:27:53.238809 | orchestrator |  { 2025-05-14 14:27:53.239281 | orchestrator |  "data": "osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6", 2025-05-14 14:27:53.240197 | orchestrator |  "data_vg": "ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6" 2025-05-14 14:27:53.240597 | orchestrator |  }, 2025-05-14 14:27:53.241102 | orchestrator |  { 2025-05-14 14:27:53.241744 | orchestrator |  "data": "osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8", 2025-05-14 14:27:53.242556 | orchestrator |  "data_vg": "ceph-6248da54-4321-5f95-9f37-ef0f81563cc8" 2025-05-14 14:27:53.243243 | orchestrator |  } 2025-05-14 14:27:53.243344 | orchestrator |  ] 2025-05-14 14:27:53.244074 | orchestrator |  } 2025-05-14 14:27:53.244547 | orchestrator | } 2025-05-14 14:27:53.244988 | orchestrator | 2025-05-14 14:27:53.246007 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 14:27:53.246108 | orchestrator | Wednesday 14 May 2025 14:27:53 +0000 (0:00:00.459) 0:00:27.966 ********* 2025-05-14 14:27:54.864521 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 14:27:54.866872 | orchestrator | 2025-05-14 14:27:54.866903 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 14:27:54.869425 | orchestrator | 2025-05-14 14:27:54.870822 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 14:27:54.870847 | orchestrator | Wednesday 14 May 2025 14:27:54 +0000 (0:00:01.634) 0:00:29.600 ********* 2025-05-14 14:27:55.165823 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 14:27:55.166751 | orchestrator | 2025-05-14 14:27:55.167863 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 14:27:55.168912 | orchestrator | Wednesday 14 May 2025 14:27:55 +0000 (0:00:00.304) 0:00:29.904 ********* 2025-05-14 14:27:55.432982 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:27:55.433481 | orchestrator | 2025-05-14 14:27:55.434427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:55.437055 | orchestrator | Wednesday 14 May 2025 14:27:55 +0000 (0:00:00.260) 0:00:30.165 ********* 2025-05-14 14:27:56.207944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-14 14:27:56.209128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-14 14:27:56.210251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-14 14:27:56.213537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-14 14:27:56.213562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-05-14 14:27:56.213574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-14 14:27:56.214358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-14 14:27:56.215303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-14 14:27:56.216043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-14 14:27:56.216994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-14 14:27:56.217721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-14 14:27:56.218374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-14 14:27:56.219011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-14 14:27:56.219711 | orchestrator | 2025-05-14 14:27:56.220752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:56.221032 | orchestrator | Wednesday 14 May 2025 14:27:56 +0000 (0:00:00.781) 0:00:30.946 ********* 2025-05-14 14:27:56.451490 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:56.452079 | orchestrator | 2025-05-14 14:27:56.453239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:56.453260 | orchestrator | Wednesday 14 May 2025 14:27:56 +0000 (0:00:00.244) 0:00:31.191 ********* 2025-05-14 14:27:56.664540 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:56.665573 | orchestrator | 2025-05-14 14:27:56.667085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:56.668193 | orchestrator | Wednesday 14 May 2025 14:27:56 +0000 (0:00:00.211) 0:00:31.402 ********* 2025-05-14 14:27:56.897595 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:56.898387 | orchestrator | 2025-05-14 14:27:56.899480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:56.900281 | orchestrator | Wednesday 14 May 2025 14:27:56 +0000 (0:00:00.233) 0:00:31.636 ********* 2025-05-14 14:27:57.114768 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:57.115952 | orchestrator | 2025-05-14 14:27:57.117542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:57.118841 | orchestrator | Wednesday 14 May 2025 14:27:57 +0000 (0:00:00.215) 0:00:31.851 ********* 2025-05-14 14:27:57.365866 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:57.366565 | orchestrator | 2025-05-14 14:27:57.367189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:57.368678 | orchestrator | Wednesday 14 May 2025 14:27:57 +0000 (0:00:00.249) 0:00:32.100 ********* 2025-05-14 14:27:57.568115 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:57.568218 | orchestrator | 2025-05-14 14:27:57.572745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:57.573921 | orchestrator | Wednesday 14 May 2025 14:27:57 +0000 (0:00:00.204) 0:00:32.304 ********* 2025-05-14 14:27:57.779290 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:57.780098 
| orchestrator | 2025-05-14 14:27:57.780530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:57.782896 | orchestrator | Wednesday 14 May 2025 14:27:57 +0000 (0:00:00.212) 0:00:32.517 ********* 2025-05-14 14:27:57.990596 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:27:57.991288 | orchestrator | 2025-05-14 14:27:57.993475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:57.995718 | orchestrator | Wednesday 14 May 2025 14:27:57 +0000 (0:00:00.209) 0:00:32.727 ********* 2025-05-14 14:27:58.637761 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2) 2025-05-14 14:27:58.637964 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2) 2025-05-14 14:27:58.638776 | orchestrator | 2025-05-14 14:27:58.639681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:58.640511 | orchestrator | Wednesday 14 May 2025 14:27:58 +0000 (0:00:00.648) 0:00:33.376 ********* 2025-05-14 14:27:59.415122 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640) 2025-05-14 14:27:59.415268 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640) 2025-05-14 14:27:59.416291 | orchestrator | 2025-05-14 14:27:59.416813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:59.417166 | orchestrator | Wednesday 14 May 2025 14:27:59 +0000 (0:00:00.773) 0:00:34.149 ********* 2025-05-14 14:27:59.850305 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb) 2025-05-14 14:27:59.851410 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb) 2025-05-14 14:27:59.852787 | orchestrator | 2025-05-14 14:27:59.854118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:27:59.854709 | orchestrator | Wednesday 14 May 2025 14:27:59 +0000 (0:00:00.437) 0:00:34.587 ********* 2025-05-14 14:28:00.286014 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb) 2025-05-14 14:28:00.287227 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb) 2025-05-14 14:28:00.291363 | orchestrator | 2025-05-14 14:28:00.291989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:28:00.292796 | orchestrator | Wednesday 14 May 2025 14:28:00 +0000 (0:00:00.435) 0:00:35.023 ********* 2025-05-14 14:28:00.639695 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 14:28:00.640565 | orchestrator | 2025-05-14 14:28:00.641399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:00.642353 | orchestrator | Wednesday 14 May 2025 14:28:00 +0000 (0:00:00.356) 0:00:35.379 ********* 2025-05-14 14:28:01.066801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-14 14:28:01.066960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-14 14:28:01.068834 | orchestrator | 
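Annotation: the repeated "Add known links to the list of available block devices" tasks resolve /dev/disk/by-id symlinks (scsi-0QEMU_..., scsi-SQEMU_..., ata-QEMU_DVD-ROM_...) back to kernel device names so they can be treated as aliases of sda..sdd and sr0. The actual logic lives in /ansible/tasks/_add-device-links.yml, which is not shown in this log; the snippet below is only a rough standalone Python equivalent, with the directory path and filtering as assumptions:

import os

def by_id_links(by_id_dir="/dev/disk/by-id"):
    # Map each by-id symlink to the kernel block device it points at,
    # e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid> -> sdb.
    links = {}
    if not os.path.isdir(by_id_dir):
        return links
    for name in os.listdir(by_id_dir):
        target = os.path.realpath(os.path.join(by_id_dir, name))
        links[name] = os.path.basename(target)  # e.g. "sdb" or "sr0"
    return links

if __name__ == "__main__":
    for link, dev in sorted(by_id_links().items()):
        print(f"{link} -> {dev}")
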
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-14 14:28:01.069750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-14 14:28:01.071290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-14 14:28:01.071961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-14 14:28:01.072416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-14 14:28:01.073187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-14 14:28:01.073680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-14 14:28:01.074108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-14 14:28:01.074772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-14 14:28:01.075512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-14 14:28:01.075650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-14 14:28:01.076069 | orchestrator | 2025-05-14 14:28:01.076475 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:01.076682 | orchestrator | Wednesday 14 May 2025 14:28:01 +0000 (0:00:00.425) 0:00:35.804 ********* 2025-05-14 14:28:01.287705 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:01.288267 | orchestrator | 2025-05-14 14:28:01.289191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:01.290134 | orchestrator | Wednesday 14 May 2025 14:28:01 +0000 (0:00:00.221) 0:00:36.026 ********* 2025-05-14 14:28:01.538775 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:01.539358 | orchestrator | 2025-05-14 14:28:01.539724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:01.540395 | orchestrator | Wednesday 14 May 2025 14:28:01 +0000 (0:00:00.251) 0:00:36.277 ********* 2025-05-14 14:28:01.763984 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:01.764138 | orchestrator | 2025-05-14 14:28:01.765112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:01.766093 | orchestrator | Wednesday 14 May 2025 14:28:01 +0000 (0:00:00.224) 0:00:36.502 ********* 2025-05-14 14:28:01.973174 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:01.973277 | orchestrator | 2025-05-14 14:28:01.974070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:01.977954 | orchestrator | Wednesday 14 May 2025 14:28:01 +0000 (0:00:00.208) 0:00:36.710 ********* 2025-05-14 14:28:02.656925 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:02.657037 | orchestrator | 2025-05-14 14:28:02.661065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:02.661146 | orchestrator | Wednesday 14 May 2025 14:28:02 +0000 (0:00:00.681) 0:00:37.392 ********* 2025-05-14 14:28:02.877001 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
14:28:02.877741 | orchestrator | 2025-05-14 14:28:02.878888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:02.879522 | orchestrator | Wednesday 14 May 2025 14:28:02 +0000 (0:00:00.223) 0:00:37.615 ********* 2025-05-14 14:28:03.087273 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:03.087372 | orchestrator | 2025-05-14 14:28:03.088218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:03.089163 | orchestrator | Wednesday 14 May 2025 14:28:03 +0000 (0:00:00.209) 0:00:37.824 ********* 2025-05-14 14:28:03.291256 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:03.291528 | orchestrator | 2025-05-14 14:28:03.292660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:03.293522 | orchestrator | Wednesday 14 May 2025 14:28:03 +0000 (0:00:00.206) 0:00:38.030 ********* 2025-05-14 14:28:03.910978 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-14 14:28:03.911086 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-14 14:28:03.911273 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-14 14:28:03.911738 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-14 14:28:03.912627 | orchestrator | 2025-05-14 14:28:03.912902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:03.913269 | orchestrator | Wednesday 14 May 2025 14:28:03 +0000 (0:00:00.613) 0:00:38.644 ********* 2025-05-14 14:28:04.111962 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:04.112066 | orchestrator | 2025-05-14 14:28:04.112499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:04.113090 | orchestrator | Wednesday 14 May 2025 14:28:04 +0000 (0:00:00.205) 0:00:38.850 ********* 2025-05-14 14:28:04.312097 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:04.312373 | orchestrator | 2025-05-14 14:28:04.313133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:04.314101 | orchestrator | Wednesday 14 May 2025 14:28:04 +0000 (0:00:00.200) 0:00:39.051 ********* 2025-05-14 14:28:04.527105 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:04.527208 | orchestrator | 2025-05-14 14:28:04.527275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:28:04.527761 | orchestrator | Wednesday 14 May 2025 14:28:04 +0000 (0:00:00.212) 0:00:39.263 ********* 2025-05-14 14:28:04.751395 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:04.752917 | orchestrator | 2025-05-14 14:28:04.753815 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 14:28:04.755351 | orchestrator | Wednesday 14 May 2025 14:28:04 +0000 (0:00:00.226) 0:00:39.490 ********* 2025-05-14 14:28:04.933579 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-14 14:28:04.934174 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-14 14:28:04.934617 | orchestrator | 2025-05-14 14:28:04.935290 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 14:28:04.935926 | orchestrator | Wednesday 14 May 2025 14:28:04 +0000 (0:00:00.180) 0:00:39.671 ********* 2025-05-14 14:28:05.248576 | 
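Annotation: every osd_lvm_uuid printed in this play (5e8c3a6b-4eea-5bb3-..., 904dffa8-69ed-5eff-..., dde3cc5c-c032-592e-..., and so on) carries the version-5 marker in its third group, i.e. these look like name-based (SHA-1) UUIDs rather than random ones, which fits the "Set UUIDs for OSD VGs/LVs" task producing stable identifiers across runs. The namespace and name inputs are not visible in this log, so the sketch below only illustrates the uuid5 mechanism with assumed inputs:

import uuid

# Hypothetical inputs: a fixed namespace plus "<hostname>-<device>".
# The playbook's real namespace/name choice is not shown in this log.
NAMESPACE = uuid.NAMESPACE_DNS

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # uuid5 is deterministic: the same hostname/device pair always yields
    # the same UUID, so reruns keep the same VG/LV names.
    return str(uuid.uuid5(NAMESPACE, f"{hostname}-{device}"))

print(osd_lvm_uuid("testbed-node-5", "sdb"))
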
orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:05.249613 | orchestrator | 2025-05-14 14:28:05.251900 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 14:28:05.252636 | orchestrator | Wednesday 14 May 2025 14:28:05 +0000 (0:00:00.314) 0:00:39.985 ********* 2025-05-14 14:28:05.386957 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:05.387630 | orchestrator | 2025-05-14 14:28:05.388928 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 14:28:05.389983 | orchestrator | Wednesday 14 May 2025 14:28:05 +0000 (0:00:00.140) 0:00:40.126 ********* 2025-05-14 14:28:05.519274 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:05.520147 | orchestrator | 2025-05-14 14:28:05.520712 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 14:28:05.521508 | orchestrator | Wednesday 14 May 2025 14:28:05 +0000 (0:00:00.132) 0:00:40.258 ********* 2025-05-14 14:28:05.668996 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:28:05.669634 | orchestrator | 2025-05-14 14:28:05.670579 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 14:28:05.671684 | orchestrator | Wednesday 14 May 2025 14:28:05 +0000 (0:00:00.148) 0:00:40.407 ********* 2025-05-14 14:28:05.867413 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dde3cc5c-c032-592e-96b0-b740b8614a8d'}}) 2025-05-14 14:28:05.867583 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5402478b-0937-58a5-a80f-00ed6e381d0d'}}) 2025-05-14 14:28:05.867704 | orchestrator | 2025-05-14 14:28:05.868085 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 14:28:05.868692 | orchestrator | Wednesday 14 May 2025 14:28:05 +0000 (0:00:00.198) 0:00:40.606 ********* 2025-05-14 14:28:06.028093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dde3cc5c-c032-592e-96b0-b740b8614a8d'}})  2025-05-14 14:28:06.028807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5402478b-0937-58a5-a80f-00ed6e381d0d'}})  2025-05-14 14:28:06.029912 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:06.030250 | orchestrator | 2025-05-14 14:28:06.033130 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 14:28:06.033776 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.160) 0:00:40.767 ********* 2025-05-14 14:28:06.199747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dde3cc5c-c032-592e-96b0-b740b8614a8d'}})  2025-05-14 14:28:06.200353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5402478b-0937-58a5-a80f-00ed6e381d0d'}})  2025-05-14 14:28:06.201735 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:06.203242 | orchestrator | 2025-05-14 14:28:06.204581 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 14:28:06.205578 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.170) 0:00:40.937 ********* 2025-05-14 14:28:06.353219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dde3cc5c-c032-592e-96b0-b740b8614a8d'}})  2025-05-14 14:28:06.353366 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5402478b-0937-58a5-a80f-00ed6e381d0d'}})  2025-05-14 14:28:06.354242 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:06.354921 | orchestrator | 2025-05-14 14:28:06.355438 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 14:28:06.357195 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.153) 0:00:41.091 ********* 2025-05-14 14:28:06.490971 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:28:06.491808 | orchestrator | 2025-05-14 14:28:06.492373 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 14:28:06.493537 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.138) 0:00:41.230 ********* 2025-05-14 14:28:06.632255 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:28:06.632918 | orchestrator | 2025-05-14 14:28:06.633324 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 14:28:06.634140 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.141) 0:00:41.371 ********* 2025-05-14 14:28:06.763672 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:06.764221 | orchestrator | 2025-05-14 14:28:06.765211 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 14:28:06.765938 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.130) 0:00:41.502 ********* 2025-05-14 14:28:06.896076 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:06.896260 | orchestrator | 2025-05-14 14:28:06.897114 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 14:28:06.897690 | orchestrator | Wednesday 14 May 2025 14:28:06 +0000 (0:00:00.132) 0:00:41.635 ********* 2025-05-14 14:28:07.240880 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:07.242133 | orchestrator | 2025-05-14 14:28:07.244693 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 14:28:07.244773 | orchestrator | Wednesday 14 May 2025 14:28:07 +0000 (0:00:00.343) 0:00:41.978 ********* 2025-05-14 14:28:07.389173 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:28:07.390623 | orchestrator |  "ceph_osd_devices": { 2025-05-14 14:28:07.393239 | orchestrator |  "sdb": { 2025-05-14 14:28:07.393891 | orchestrator |  "osd_lvm_uuid": "dde3cc5c-c032-592e-96b0-b740b8614a8d" 2025-05-14 14:28:07.395054 | orchestrator |  }, 2025-05-14 14:28:07.396358 | orchestrator |  "sdc": { 2025-05-14 14:28:07.396739 | orchestrator |  "osd_lvm_uuid": "5402478b-0937-58a5-a80f-00ed6e381d0d" 2025-05-14 14:28:07.397312 | orchestrator |  } 2025-05-14 14:28:07.397864 | orchestrator |  } 2025-05-14 14:28:07.398540 | orchestrator | } 2025-05-14 14:28:07.399236 | orchestrator | 2025-05-14 14:28:07.399693 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 14:28:07.400148 | orchestrator | Wednesday 14 May 2025 14:28:07 +0000 (0:00:00.148) 0:00:42.126 ********* 2025-05-14 14:28:07.529007 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:07.529271 | orchestrator | 2025-05-14 14:28:07.529906 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 14:28:07.530440 | orchestrator | Wednesday 14 May 2025 14:28:07 +0000 (0:00:00.141) 0:00:42.268 ********* 2025-05-14 
14:28:07.667590 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:07.668163 | orchestrator | 2025-05-14 14:28:07.668955 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 14:28:07.669922 | orchestrator | Wednesday 14 May 2025 14:28:07 +0000 (0:00:00.138) 0:00:42.406 ********* 2025-05-14 14:28:07.824007 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:28:07.824348 | orchestrator | 2025-05-14 14:28:07.825371 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 14:28:07.826399 | orchestrator | Wednesday 14 May 2025 14:28:07 +0000 (0:00:00.156) 0:00:42.563 ********* 2025-05-14 14:28:08.097260 | orchestrator | changed: [testbed-node-5] => { 2025-05-14 14:28:08.097422 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 14:28:08.098967 | orchestrator |  "ceph_osd_devices": { 2025-05-14 14:28:08.102492 | orchestrator |  "sdb": { 2025-05-14 14:28:08.102949 | orchestrator |  "osd_lvm_uuid": "dde3cc5c-c032-592e-96b0-b740b8614a8d" 2025-05-14 14:28:08.103965 | orchestrator |  }, 2025-05-14 14:28:08.104730 | orchestrator |  "sdc": { 2025-05-14 14:28:08.104959 | orchestrator |  "osd_lvm_uuid": "5402478b-0937-58a5-a80f-00ed6e381d0d" 2025-05-14 14:28:08.105540 | orchestrator |  } 2025-05-14 14:28:08.106198 | orchestrator |  }, 2025-05-14 14:28:08.106499 | orchestrator |  "lvm_volumes": [ 2025-05-14 14:28:08.107070 | orchestrator |  { 2025-05-14 14:28:08.107517 | orchestrator |  "data": "osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d", 2025-05-14 14:28:08.108282 | orchestrator |  "data_vg": "ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d" 2025-05-14 14:28:08.108623 | orchestrator |  }, 2025-05-14 14:28:08.109432 | orchestrator |  { 2025-05-14 14:28:08.109823 | orchestrator |  "data": "osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d", 2025-05-14 14:28:08.110573 | orchestrator |  "data_vg": "ceph-5402478b-0937-58a5-a80f-00ed6e381d0d" 2025-05-14 14:28:08.110651 | orchestrator |  } 2025-05-14 14:28:08.111538 | orchestrator |  ] 2025-05-14 14:28:08.111831 | orchestrator |  } 2025-05-14 14:28:08.112227 | orchestrator | } 2025-05-14 14:28:08.112772 | orchestrator | 2025-05-14 14:28:08.113181 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 14:28:08.113527 | orchestrator | Wednesday 14 May 2025 14:28:08 +0000 (0:00:00.271) 0:00:42.835 ********* 2025-05-14 14:28:09.168937 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 14:28:09.169044 | orchestrator | 2025-05-14 14:28:09.170231 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:28:09.170966 | orchestrator | 2025-05-14 14:28:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:28:09.170991 | orchestrator | 2025-05-14 14:28:09 | INFO  | Please wait and do not abort execution. 
2025-05-14 14:28:09.172205 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-14 14:28:09.173899 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-14 14:28:09.174424 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-14 14:28:09.175534 | orchestrator | 2025-05-14 14:28:09.176622 | orchestrator | 2025-05-14 14:28:09.177360 | orchestrator | 2025-05-14 14:28:09.178105 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:28:09.179039 | orchestrator | Wednesday 14 May 2025 14:28:09 +0000 (0:00:01.071) 0:00:43.906 ********* 2025-05-14 14:28:09.180486 | orchestrator | =============================================================================== 2025-05-14 14:28:09.180784 | orchestrator | Write configuration file ------------------------------------------------ 4.85s 2025-05-14 14:28:09.182411 | orchestrator | Add known links to the list of available block devices ------------------ 1.75s 2025-05-14 14:28:09.182684 | orchestrator | Add known partitions to the list of available block devices ------------- 1.57s 2025-05-14 14:28:09.183396 | orchestrator | Print configuration data ------------------------------------------------ 1.01s 2025-05-14 14:28:09.184264 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2025-05-14 14:28:09.185316 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s 2025-05-14 14:28:09.185721 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2025-05-14 14:28:09.186552 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2025-05-14 14:28:09.187207 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-05-14 14:28:09.187851 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-05-14 14:28:09.188302 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s 2025-05-14 14:28:09.188661 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-05-14 14:28:09.189130 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.64s 2025-05-14 14:28:09.189408 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-05-14 14:28:09.190109 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-05-14 14:28:09.190895 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-05-14 14:28:09.191412 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-05-14 14:28:09.193150 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.58s 2025-05-14 14:28:09.193284 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-05-14 14:28:09.194290 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2025-05-14 14:28:21.268385 | orchestrator | 2025-05-14 14:28:21 | INFO  | Task 2857009f-0081-44fc-a709-0953a259bfd5 is running in background. Output coming soon. 
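The configuration data dumped by the "Print configuration data" task above, ceph_osd_devices plus the compiled lvm_volumes list, is what the "Write configuration file" handler persists on testbed-manager for the follow-up plays. A minimal sketch of that structure as host variables, using the UUIDs reported for testbed-node-5 in this run (the file name and exact layout used by the handler are assumptions, not shown in this output):

    # Sketch of the persisted Ceph LVM configuration (layout assumed).
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: dde3cc5c-c032-592e-96b0-b740b8614a8d
      sdc:
        osd_lvm_uuid: 5402478b-0937-58a5-a80f-00ed6e381d0d
    lvm_volumes:
      - data: osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d
        data_vg: ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d
      - data: osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d
        data_vg: ceph-5402478b-0937-58a5-a80f-00ed6e381d0d

As the dump shows, each data_vg is simply "ceph-" plus the per-device osd_lvm_uuid and each data LV name prefixes the same UUID with "osd-block-", which is why the later ceph-create-lvm-devices play can derive VG and LV names from ceph_osd_devices alone.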
2025-05-14 14:28:56.140244 | orchestrator | 2025-05-14 14:28:48 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-14 14:28:56.140355 | orchestrator | 2025-05-14 14:28:48 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-14 14:28:56.140370 | orchestrator | 2025-05-14 14:28:48 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-14 14:28:56.140383 | orchestrator | 2025-05-14 14:28:48 | INFO  | Handling group overwrites in 99-overwrite 2025-05-14 14:28:56.140395 | orchestrator | 2025-05-14 14:28:48 | INFO  | Removing group frr:children from 60-generic 2025-05-14 14:28:56.140406 | orchestrator | 2025-05-14 14:28:48 | INFO  | Removing group storage:children from 50-kolla 2025-05-14 14:28:56.140417 | orchestrator | 2025-05-14 14:28:48 | INFO  | Removing group netbird:children from 50-infrastruture 2025-05-14 14:28:56.140428 | orchestrator | 2025-05-14 14:28:48 | INFO  | Removing group ceph-mds from 50-ceph 2025-05-14 14:28:56.140501 | orchestrator | 2025-05-14 14:28:48 | INFO  | Removing group ceph-rgw from 50-ceph 2025-05-14 14:28:56.140515 | orchestrator | 2025-05-14 14:28:48 | INFO  | Handling group overwrites in 20-roles 2025-05-14 14:28:56.140526 | orchestrator | 2025-05-14 14:28:48 | INFO  | Removing group k3s_node from 50-infrastruture 2025-05-14 14:28:56.140537 | orchestrator | 2025-05-14 14:28:49 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-05-14 14:28:56.140548 | orchestrator | 2025-05-14 14:28:55 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-05-14 14:28:57.749073 | orchestrator | 2025-05-14 14:28:57 | INFO  | Task 25589fb1-fdb6-4a63-8726-305fe892aa38 (ceph-create-lvm-devices) was prepared for execution. 2025-05-14 14:28:57.749180 | orchestrator | 2025-05-14 14:28:57 | INFO  | It takes a moment until task 25589fb1-fdb6-4a63-8726-305fe892aa38 (ceph-create-lvm-devices) has been started and output is visible here. 
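The ceph-create-lvm-devices play that follows creates one volume group per OSD data device and a single osd-block logical volume filling it, as the "Create block VGs" and "Create block LVs" tasks below show. A minimal Ansible sketch of those two steps, assuming the community.general.lvg and community.general.lvol modules and the ceph_osd_devices mapping from the previous play; the actual tasks under /ansible are not visible in this log and may differ:

    # Sketch only: one VG per OSD device, one block LV using the whole VG.
    - name: Create block VGs
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: "100%VG"
      loop: "{{ ceph_osd_devices | dict2items }}"

On testbed-node-3 this corresponds to /dev/sdb and /dev/sdc becoming the PVs behind the two ceph-<uuid> VGs, which matches the VG/PV pairing reported in the lvm_report data printed at the end of that node's run.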
2025-05-14 14:29:00.673688 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 14:29:01.161005 | orchestrator | 2025-05-14 14:29:01.161105 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 14:29:01.161116 | orchestrator | 2025-05-14 14:29:01.161123 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 14:29:01.161131 | orchestrator | Wednesday 14 May 2025 14:29:01 +0000 (0:00:00.420) 0:00:00.420 ********* 2025-05-14 14:29:01.389348 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 14:29:01.389603 | orchestrator | 2025-05-14 14:29:01.390288 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 14:29:01.390922 | orchestrator | Wednesday 14 May 2025 14:29:01 +0000 (0:00:00.230) 0:00:00.651 ********* 2025-05-14 14:29:01.612324 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:01.612902 | orchestrator | 2025-05-14 14:29:01.613869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:01.614745 | orchestrator | Wednesday 14 May 2025 14:29:01 +0000 (0:00:00.225) 0:00:00.876 ********* 2025-05-14 14:29:02.316708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-14 14:29:02.317251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-14 14:29:02.317752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-14 14:29:02.318605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-14 14:29:02.319637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-14 14:29:02.319933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-14 14:29:02.320713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-14 14:29:02.321586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-14 14:29:02.321749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-14 14:29:02.322800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-14 14:29:02.323370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-14 14:29:02.323777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-14 14:29:02.324250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-14 14:29:02.324645 | orchestrator | 2025-05-14 14:29:02.325125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:02.325536 | orchestrator | Wednesday 14 May 2025 14:29:02 +0000 (0:00:00.703) 0:00:01.580 ********* 2025-05-14 14:29:02.513855 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:02.514351 | orchestrator | 2025-05-14 14:29:02.516944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:02.516978 | orchestrator | Wednesday 14 May 2025 14:29:02 +0000 
(0:00:00.198) 0:00:01.778 ********* 2025-05-14 14:29:02.711979 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:02.712431 | orchestrator | 2025-05-14 14:29:02.713144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:02.713973 | orchestrator | Wednesday 14 May 2025 14:29:02 +0000 (0:00:00.197) 0:00:01.976 ********* 2025-05-14 14:29:02.913725 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:02.914623 | orchestrator | 2025-05-14 14:29:02.915210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:02.916329 | orchestrator | Wednesday 14 May 2025 14:29:02 +0000 (0:00:00.199) 0:00:02.176 ********* 2025-05-14 14:29:03.126997 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:03.131126 | orchestrator | 2025-05-14 14:29:03.132180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:03.136268 | orchestrator | Wednesday 14 May 2025 14:29:03 +0000 (0:00:00.215) 0:00:02.391 ********* 2025-05-14 14:29:03.313959 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:03.314143 | orchestrator | 2025-05-14 14:29:03.314775 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:03.315590 | orchestrator | Wednesday 14 May 2025 14:29:03 +0000 (0:00:00.186) 0:00:02.578 ********* 2025-05-14 14:29:03.508211 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:03.508615 | orchestrator | 2025-05-14 14:29:03.509342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:03.510060 | orchestrator | Wednesday 14 May 2025 14:29:03 +0000 (0:00:00.194) 0:00:02.772 ********* 2025-05-14 14:29:03.698749 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:03.698952 | orchestrator | 2025-05-14 14:29:03.699802 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:03.700761 | orchestrator | Wednesday 14 May 2025 14:29:03 +0000 (0:00:00.190) 0:00:02.963 ********* 2025-05-14 14:29:03.896333 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:03.899674 | orchestrator | 2025-05-14 14:29:03.899707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:03.899721 | orchestrator | Wednesday 14 May 2025 14:29:03 +0000 (0:00:00.196) 0:00:03.159 ********* 2025-05-14 14:29:04.522948 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580) 2025-05-14 14:29:04.523678 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580) 2025-05-14 14:29:04.524305 | orchestrator | 2025-05-14 14:29:04.524622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:04.526948 | orchestrator | Wednesday 14 May 2025 14:29:04 +0000 (0:00:00.626) 0:00:03.786 ********* 2025-05-14 14:29:05.319806 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4) 2025-05-14 14:29:05.319923 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4) 2025-05-14 14:29:05.320418 | orchestrator | 2025-05-14 14:29:05.321639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 
14:29:05.321785 | orchestrator | Wednesday 14 May 2025 14:29:05 +0000 (0:00:00.796) 0:00:04.582 ********* 2025-05-14 14:29:05.739701 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e) 2025-05-14 14:29:05.739857 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e) 2025-05-14 14:29:05.740778 | orchestrator | 2025-05-14 14:29:05.743367 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:05.744078 | orchestrator | Wednesday 14 May 2025 14:29:05 +0000 (0:00:00.419) 0:00:05.002 ********* 2025-05-14 14:29:06.163288 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4) 2025-05-14 14:29:06.164010 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4) 2025-05-14 14:29:06.164714 | orchestrator | 2025-05-14 14:29:06.166099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:06.166573 | orchestrator | Wednesday 14 May 2025 14:29:06 +0000 (0:00:00.424) 0:00:05.426 ********* 2025-05-14 14:29:06.502277 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 14:29:06.503857 | orchestrator | 2025-05-14 14:29:06.504084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:06.505409 | orchestrator | Wednesday 14 May 2025 14:29:06 +0000 (0:00:00.339) 0:00:05.766 ********* 2025-05-14 14:29:06.953979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-14 14:29:06.955265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-14 14:29:06.955615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-14 14:29:06.958392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-14 14:29:06.958937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-14 14:29:06.959343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-14 14:29:06.959634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-14 14:29:06.960305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-14 14:29:06.960644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-14 14:29:06.961123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-14 14:29:06.961601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-14 14:29:06.962146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-14 14:29:06.962436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-14 14:29:06.963153 | orchestrator | 2025-05-14 14:29:06.963363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:06.963807 | orchestrator | Wednesday 14 May 2025 14:29:06 +0000 
(0:00:00.449) 0:00:06.216 ********* 2025-05-14 14:29:07.151059 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:07.151502 | orchestrator | 2025-05-14 14:29:07.152617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:07.156341 | orchestrator | Wednesday 14 May 2025 14:29:07 +0000 (0:00:00.198) 0:00:06.415 ********* 2025-05-14 14:29:07.339753 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:07.340080 | orchestrator | 2025-05-14 14:29:07.340849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:07.341817 | orchestrator | Wednesday 14 May 2025 14:29:07 +0000 (0:00:00.188) 0:00:06.604 ********* 2025-05-14 14:29:07.533877 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:07.534090 | orchestrator | 2025-05-14 14:29:07.534856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:07.535374 | orchestrator | Wednesday 14 May 2025 14:29:07 +0000 (0:00:00.194) 0:00:06.798 ********* 2025-05-14 14:29:07.730749 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:07.731073 | orchestrator | 2025-05-14 14:29:07.731775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:07.732655 | orchestrator | Wednesday 14 May 2025 14:29:07 +0000 (0:00:00.196) 0:00:06.994 ********* 2025-05-14 14:29:08.304157 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:08.304314 | orchestrator | 2025-05-14 14:29:08.304964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:08.306766 | orchestrator | Wednesday 14 May 2025 14:29:08 +0000 (0:00:00.571) 0:00:07.566 ********* 2025-05-14 14:29:08.490995 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:08.491315 | orchestrator | 2025-05-14 14:29:08.492361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:08.493264 | orchestrator | Wednesday 14 May 2025 14:29:08 +0000 (0:00:00.189) 0:00:07.756 ********* 2025-05-14 14:29:08.698254 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:08.699245 | orchestrator | 2025-05-14 14:29:08.700339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:08.701138 | orchestrator | Wednesday 14 May 2025 14:29:08 +0000 (0:00:00.206) 0:00:07.962 ********* 2025-05-14 14:29:08.895182 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:08.895354 | orchestrator | 2025-05-14 14:29:08.896305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:08.900121 | orchestrator | Wednesday 14 May 2025 14:29:08 +0000 (0:00:00.196) 0:00:08.158 ********* 2025-05-14 14:29:09.547032 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-14 14:29:09.547920 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-14 14:29:09.548279 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-14 14:29:09.548869 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-14 14:29:09.549801 | orchestrator | 2025-05-14 14:29:09.552280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:09.552675 | orchestrator | Wednesday 14 May 2025 14:29:09 +0000 (0:00:00.652) 0:00:08.811 ********* 2025-05-14 14:29:09.743572 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:29:09.743665 | orchestrator | 2025-05-14 14:29:09.744407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:09.745179 | orchestrator | Wednesday 14 May 2025 14:29:09 +0000 (0:00:00.195) 0:00:09.006 ********* 2025-05-14 14:29:09.933696 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:09.936596 | orchestrator | 2025-05-14 14:29:09.936621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:09.936634 | orchestrator | Wednesday 14 May 2025 14:29:09 +0000 (0:00:00.189) 0:00:09.195 ********* 2025-05-14 14:29:10.132537 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:10.132647 | orchestrator | 2025-05-14 14:29:10.136619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:10.136735 | orchestrator | Wednesday 14 May 2025 14:29:10 +0000 (0:00:00.198) 0:00:09.394 ********* 2025-05-14 14:29:10.331193 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:10.331382 | orchestrator | 2025-05-14 14:29:10.331558 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 14:29:10.332548 | orchestrator | Wednesday 14 May 2025 14:29:10 +0000 (0:00:00.200) 0:00:09.595 ********* 2025-05-14 14:29:10.467700 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:10.467881 | orchestrator | 2025-05-14 14:29:10.468548 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 14:29:10.469146 | orchestrator | Wednesday 14 May 2025 14:29:10 +0000 (0:00:00.136) 0:00:09.732 ********* 2025-05-14 14:29:10.670955 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}}) 2025-05-14 14:29:10.671616 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46afb65a-1642-5955-80d8-115babed40cc'}}) 2025-05-14 14:29:10.671838 | orchestrator | 2025-05-14 14:29:10.672460 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 14:29:10.673303 | orchestrator | Wednesday 14 May 2025 14:29:10 +0000 (0:00:00.202) 0:00:09.934 ********* 2025-05-14 14:29:12.906575 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}) 2025-05-14 14:29:12.907036 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'}) 2025-05-14 14:29:12.908681 | orchestrator | 2025-05-14 14:29:12.910945 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 14:29:12.911847 | orchestrator | Wednesday 14 May 2025 14:29:12 +0000 (0:00:02.234) 0:00:12.169 ********* 2025-05-14 14:29:13.096935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:13.097367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:13.100147 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:13.102643 | orchestrator | 2025-05-14 14:29:13.103060 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 14:29:13.103740 | orchestrator | Wednesday 14 May 2025 14:29:13 +0000 (0:00:00.190) 0:00:12.359 ********* 2025-05-14 14:29:14.601950 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}) 2025-05-14 14:29:14.602195 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'}) 2025-05-14 14:29:14.603186 | orchestrator | 2025-05-14 14:29:14.604704 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 14:29:14.605516 | orchestrator | Wednesday 14 May 2025 14:29:14 +0000 (0:00:01.504) 0:00:13.864 ********* 2025-05-14 14:29:14.774964 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:14.775978 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:14.776542 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:14.776884 | orchestrator | 2025-05-14 14:29:14.777352 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 14:29:14.778122 | orchestrator | Wednesday 14 May 2025 14:29:14 +0000 (0:00:00.174) 0:00:14.039 ********* 2025-05-14 14:29:14.919684 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:14.920072 | orchestrator | 2025-05-14 14:29:14.920341 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-14 14:29:14.920898 | orchestrator | Wednesday 14 May 2025 14:29:14 +0000 (0:00:00.144) 0:00:14.184 ********* 2025-05-14 14:29:15.088480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:15.088654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:15.089760 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:15.090494 | orchestrator | 2025-05-14 14:29:15.091581 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 14:29:15.092216 | orchestrator | Wednesday 14 May 2025 14:29:15 +0000 (0:00:00.168) 0:00:14.353 ********* 2025-05-14 14:29:15.233077 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:15.235284 | orchestrator | 2025-05-14 14:29:15.235670 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 14:29:15.236559 | orchestrator | Wednesday 14 May 2025 14:29:15 +0000 (0:00:00.143) 0:00:14.497 ********* 2025-05-14 14:29:15.401146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:15.402183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:15.403011 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:29:15.403934 | orchestrator | 2025-05-14 14:29:15.404948 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 14:29:15.405488 | orchestrator | Wednesday 14 May 2025 14:29:15 +0000 (0:00:00.166) 0:00:14.663 ********* 2025-05-14 14:29:15.692573 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:15.692675 | orchestrator | 2025-05-14 14:29:15.692690 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 14:29:15.692703 | orchestrator | Wednesday 14 May 2025 14:29:15 +0000 (0:00:00.290) 0:00:14.954 ********* 2025-05-14 14:29:15.853335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:15.853602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:15.854802 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:15.855066 | orchestrator | 2025-05-14 14:29:15.855738 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 14:29:15.856485 | orchestrator | Wednesday 14 May 2025 14:29:15 +0000 (0:00:00.159) 0:00:15.113 ********* 2025-05-14 14:29:15.986251 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:15.986414 | orchestrator | 2025-05-14 14:29:15.987418 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 14:29:15.988528 | orchestrator | Wednesday 14 May 2025 14:29:15 +0000 (0:00:00.136) 0:00:15.249 ********* 2025-05-14 14:29:16.164793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:16.164992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:16.165691 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:16.166763 | orchestrator | 2025-05-14 14:29:16.167545 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 14:29:16.167887 | orchestrator | Wednesday 14 May 2025 14:29:16 +0000 (0:00:00.179) 0:00:15.429 ********* 2025-05-14 14:29:16.328965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:16.329324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:16.329774 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:16.329797 | orchestrator | 2025-05-14 14:29:16.330642 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 14:29:16.333359 | orchestrator | Wednesday 14 May 2025 14:29:16 +0000 (0:00:00.162) 0:00:15.592 ********* 2025-05-14 14:29:16.501562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:16.501668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:16.501683 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:16.502300 | orchestrator | 2025-05-14 14:29:16.502326 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 14:29:16.502340 | orchestrator | Wednesday 14 May 2025 14:29:16 +0000 (0:00:00.163) 0:00:15.755 ********* 2025-05-14 14:29:16.638314 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:16.638718 | orchestrator | 2025-05-14 14:29:16.639573 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 14:29:16.643111 | orchestrator | Wednesday 14 May 2025 14:29:16 +0000 (0:00:00.145) 0:00:15.901 ********* 2025-05-14 14:29:16.788921 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:16.790421 | orchestrator | 2025-05-14 14:29:16.791685 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 14:29:16.792761 | orchestrator | Wednesday 14 May 2025 14:29:16 +0000 (0:00:00.148) 0:00:16.050 ********* 2025-05-14 14:29:16.914908 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:16.918845 | orchestrator | 2025-05-14 14:29:16.920674 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 14:29:16.920719 | orchestrator | Wednesday 14 May 2025 14:29:16 +0000 (0:00:00.127) 0:00:16.177 ********* 2025-05-14 14:29:17.057511 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:29:17.057683 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 14:29:17.061388 | orchestrator | } 2025-05-14 14:29:17.061473 | orchestrator | 2025-05-14 14:29:17.061611 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 14:29:17.062818 | orchestrator | Wednesday 14 May 2025 14:29:17 +0000 (0:00:00.142) 0:00:16.320 ********* 2025-05-14 14:29:17.207840 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:29:17.208337 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 14:29:17.209385 | orchestrator | } 2025-05-14 14:29:17.210281 | orchestrator | 2025-05-14 14:29:17.211422 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 14:29:17.211932 | orchestrator | Wednesday 14 May 2025 14:29:17 +0000 (0:00:00.151) 0:00:16.471 ********* 2025-05-14 14:29:17.376000 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:29:17.376677 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 14:29:17.377076 | orchestrator | } 2025-05-14 14:29:17.377587 | orchestrator | 2025-05-14 14:29:17.380666 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 14:29:17.381095 | orchestrator | Wednesday 14 May 2025 14:29:17 +0000 (0:00:00.167) 0:00:16.639 ********* 2025-05-14 14:29:18.372495 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:18.372747 | orchestrator | 2025-05-14 14:29:18.373782 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 14:29:18.374672 | orchestrator | Wednesday 14 May 2025 14:29:18 +0000 (0:00:00.996) 0:00:17.635 ********* 2025-05-14 14:29:18.899358 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:18.899823 | orchestrator | 2025-05-14 14:29:18.900563 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-05-14 14:29:18.901157 | orchestrator | Wednesday 14 May 2025 14:29:18 +0000 (0:00:00.527) 0:00:18.163 ********* 2025-05-14 14:29:19.421857 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:19.422557 | orchestrator | 2025-05-14 14:29:19.422849 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 14:29:19.423408 | orchestrator | Wednesday 14 May 2025 14:29:19 +0000 (0:00:00.522) 0:00:18.685 ********* 2025-05-14 14:29:19.568984 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:19.569157 | orchestrator | 2025-05-14 14:29:19.569662 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 14:29:19.570400 | orchestrator | Wednesday 14 May 2025 14:29:19 +0000 (0:00:00.147) 0:00:18.833 ********* 2025-05-14 14:29:19.682232 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:19.682631 | orchestrator | 2025-05-14 14:29:19.683075 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 14:29:19.683723 | orchestrator | Wednesday 14 May 2025 14:29:19 +0000 (0:00:00.112) 0:00:18.946 ********* 2025-05-14 14:29:19.792246 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:19.792630 | orchestrator | 2025-05-14 14:29:19.793085 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 14:29:19.793762 | orchestrator | Wednesday 14 May 2025 14:29:19 +0000 (0:00:00.110) 0:00:19.057 ********* 2025-05-14 14:29:19.932060 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:29:19.932137 | orchestrator |  "vgs_report": { 2025-05-14 14:29:19.933236 | orchestrator |  "vg": [] 2025-05-14 14:29:19.933921 | orchestrator |  } 2025-05-14 14:29:19.934568 | orchestrator | } 2025-05-14 14:29:19.937060 | orchestrator | 2025-05-14 14:29:19.937087 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 14:29:19.937099 | orchestrator | Wednesday 14 May 2025 14:29:19 +0000 (0:00:00.139) 0:00:19.196 ********* 2025-05-14 14:29:20.072649 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:20.072705 | orchestrator | 2025-05-14 14:29:20.073239 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 14:29:20.075551 | orchestrator | Wednesday 14 May 2025 14:29:20 +0000 (0:00:00.138) 0:00:19.335 ********* 2025-05-14 14:29:20.207326 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:20.207393 | orchestrator | 2025-05-14 14:29:20.207506 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-14 14:29:20.207832 | orchestrator | Wednesday 14 May 2025 14:29:20 +0000 (0:00:00.135) 0:00:19.471 ********* 2025-05-14 14:29:20.347754 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:20.348231 | orchestrator | 2025-05-14 14:29:20.348706 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 14:29:20.349528 | orchestrator | Wednesday 14 May 2025 14:29:20 +0000 (0:00:00.140) 0:00:19.611 ********* 2025-05-14 14:29:20.478195 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:20.478402 | orchestrator | 2025-05-14 14:29:20.478690 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 14:29:20.478720 | orchestrator | Wednesday 14 May 2025 14:29:20 +0000 (0:00:00.130) 0:00:19.742 ********* 2025-05-14 
14:29:20.782501 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:20.782612 | orchestrator | 2025-05-14 14:29:20.782691 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 14:29:20.782707 | orchestrator | Wednesday 14 May 2025 14:29:20 +0000 (0:00:00.303) 0:00:20.046 ********* 2025-05-14 14:29:20.925482 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:20.926613 | orchestrator | 2025-05-14 14:29:20.927192 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 14:29:20.927388 | orchestrator | Wednesday 14 May 2025 14:29:20 +0000 (0:00:00.143) 0:00:20.189 ********* 2025-05-14 14:29:21.067371 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.067639 | orchestrator | 2025-05-14 14:29:21.067938 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 14:29:21.068423 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.141) 0:00:20.330 ********* 2025-05-14 14:29:21.229306 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.230427 | orchestrator | 2025-05-14 14:29:21.231195 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 14:29:21.231919 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.162) 0:00:20.493 ********* 2025-05-14 14:29:21.383960 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.384151 | orchestrator | 2025-05-14 14:29:21.384707 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 14:29:21.385638 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.154) 0:00:20.647 ********* 2025-05-14 14:29:21.526267 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.526746 | orchestrator | 2025-05-14 14:29:21.527497 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 14:29:21.528298 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.143) 0:00:20.790 ********* 2025-05-14 14:29:21.667998 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.668189 | orchestrator | 2025-05-14 14:29:21.668213 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 14:29:21.668899 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.141) 0:00:20.932 ********* 2025-05-14 14:29:21.809758 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.809861 | orchestrator | 2025-05-14 14:29:21.809876 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 14:29:21.809889 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.136) 0:00:21.069 ********* 2025-05-14 14:29:21.937643 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:21.937737 | orchestrator | 2025-05-14 14:29:21.939088 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 14:29:21.941939 | orchestrator | Wednesday 14 May 2025 14:29:21 +0000 (0:00:00.131) 0:00:21.201 ********* 2025-05-14 14:29:22.065034 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:22.065692 | orchestrator | 2025-05-14 14:29:22.069669 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 14:29:22.070332 | orchestrator | Wednesday 14 May 2025 14:29:22 +0000 (0:00:00.128) 0:00:21.329 
********* 2025-05-14 14:29:22.250375 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:22.250615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:22.251487 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:22.252236 | orchestrator | 2025-05-14 14:29:22.252587 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 14:29:22.255587 | orchestrator | Wednesday 14 May 2025 14:29:22 +0000 (0:00:00.185) 0:00:21.514 ********* 2025-05-14 14:29:22.419982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:22.420174 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:22.420781 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:22.421560 | orchestrator | 2025-05-14 14:29:22.422620 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 14:29:22.424834 | orchestrator | Wednesday 14 May 2025 14:29:22 +0000 (0:00:00.168) 0:00:21.683 ********* 2025-05-14 14:29:22.786513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:22.787547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:22.788990 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:22.789744 | orchestrator | 2025-05-14 14:29:22.791915 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 14:29:22.791943 | orchestrator | Wednesday 14 May 2025 14:29:22 +0000 (0:00:00.367) 0:00:22.051 ********* 2025-05-14 14:29:22.938916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:22.939924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:22.941105 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:22.942589 | orchestrator | 2025-05-14 14:29:22.943215 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 14:29:22.943546 | orchestrator | Wednesday 14 May 2025 14:29:22 +0000 (0:00:00.151) 0:00:22.203 ********* 2025-05-14 14:29:23.110137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:23.111377 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:23.112464 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:23.113525 | orchestrator | 2025-05-14 14:29:23.114767 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 14:29:23.115997 | orchestrator | Wednesday 14 May 2025 14:29:23 +0000 (0:00:00.170) 0:00:22.373 ********* 2025-05-14 14:29:23.281468 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:23.281655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:23.281968 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:23.282929 | orchestrator | 2025-05-14 14:29:23.284046 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 14:29:23.286189 | orchestrator | Wednesday 14 May 2025 14:29:23 +0000 (0:00:00.172) 0:00:22.545 ********* 2025-05-14 14:29:23.448497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:23.449087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:23.450107 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:23.451160 | orchestrator | 2025-05-14 14:29:23.453173 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 14:29:23.453197 | orchestrator | Wednesday 14 May 2025 14:29:23 +0000 (0:00:00.167) 0:00:22.713 ********* 2025-05-14 14:29:23.592819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:23.593116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:23.593833 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:23.595118 | orchestrator | 2025-05-14 14:29:23.598058 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 14:29:23.599072 | orchestrator | Wednesday 14 May 2025 14:29:23 +0000 (0:00:00.144) 0:00:22.857 ********* 2025-05-14 14:29:24.089610 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:24.090198 | orchestrator | 2025-05-14 14:29:24.091360 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-14 14:29:24.092565 | orchestrator | Wednesday 14 May 2025 14:29:24 +0000 (0:00:00.496) 0:00:23.353 ********* 2025-05-14 14:29:24.601744 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:24.602602 | orchestrator | 2025-05-14 14:29:24.603630 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 14:29:24.606286 | orchestrator | Wednesday 14 May 2025 14:29:24 +0000 (0:00:00.512) 0:00:23.865 ********* 2025-05-14 14:29:24.754586 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:29:24.755691 | orchestrator | 2025-05-14 14:29:24.756548 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 14:29:24.756807 | orchestrator | Wednesday 14 May 2025 14:29:24 +0000 (0:00:00.152) 0:00:24.018 ********* 2025-05-14 14:29:24.938199 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'vg_name': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'}) 2025-05-14 14:29:24.939357 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'vg_name': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}) 2025-05-14 14:29:24.940068 | orchestrator | 2025-05-14 14:29:24.941672 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 14:29:24.944706 | orchestrator | Wednesday 14 May 2025 14:29:24 +0000 (0:00:00.183) 0:00:24.202 ********* 2025-05-14 14:29:25.315339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:25.316120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:25.317645 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:25.318632 | orchestrator | 2025-05-14 14:29:25.320283 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 14:29:25.320510 | orchestrator | Wednesday 14 May 2025 14:29:25 +0000 (0:00:00.375) 0:00:24.577 ********* 2025-05-14 14:29:25.483390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:25.484362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:25.484627 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:25.485870 | orchestrator | 2025-05-14 14:29:25.486966 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 14:29:25.487821 | orchestrator | Wednesday 14 May 2025 14:29:25 +0000 (0:00:00.169) 0:00:24.747 ********* 2025-05-14 14:29:25.656607 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'})  2025-05-14 14:29:25.656814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'})  2025-05-14 14:29:25.658505 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:29:25.659532 | orchestrator | 2025-05-14 14:29:25.660750 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 14:29:25.661558 | orchestrator | Wednesday 14 May 2025 14:29:25 +0000 (0:00:00.173) 0:00:24.920 ********* 2025-05-14 14:29:26.324921 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:29:26.325613 | orchestrator |  "lvm_report": { 2025-05-14 14:29:26.326345 | orchestrator |  "lv": [ 2025-05-14 14:29:26.327817 | orchestrator |  { 2025-05-14 14:29:26.328324 | orchestrator |  "lv_name": "osd-block-46afb65a-1642-5955-80d8-115babed40cc", 2025-05-14 14:29:26.329615 | orchestrator |  "vg_name": "ceph-46afb65a-1642-5955-80d8-115babed40cc" 2025-05-14 14:29:26.330238 | orchestrator |  }, 2025-05-14 14:29:26.331185 | orchestrator |  { 2025-05-14 14:29:26.331766 | orchestrator |  "lv_name": "osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd", 2025-05-14 
14:29:26.333167 | orchestrator |  "vg_name": "ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd" 2025-05-14 14:29:26.334360 | orchestrator |  } 2025-05-14 14:29:26.334770 | orchestrator |  ], 2025-05-14 14:29:26.335709 | orchestrator |  "pv": [ 2025-05-14 14:29:26.336858 | orchestrator |  { 2025-05-14 14:29:26.337507 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 14:29:26.338646 | orchestrator |  "vg_name": "ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd" 2025-05-14 14:29:26.339661 | orchestrator |  }, 2025-05-14 14:29:26.339706 | orchestrator |  { 2025-05-14 14:29:26.340044 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 14:29:26.340848 | orchestrator |  "vg_name": "ceph-46afb65a-1642-5955-80d8-115babed40cc" 2025-05-14 14:29:26.340995 | orchestrator |  } 2025-05-14 14:29:26.341523 | orchestrator |  ] 2025-05-14 14:29:26.341957 | orchestrator |  } 2025-05-14 14:29:26.342934 | orchestrator | } 2025-05-14 14:29:26.343790 | orchestrator | 2025-05-14 14:29:26.344009 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 14:29:26.344679 | orchestrator | 2025-05-14 14:29:26.345106 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 14:29:26.345594 | orchestrator | Wednesday 14 May 2025 14:29:26 +0000 (0:00:00.665) 0:00:25.586 ********* 2025-05-14 14:29:26.924025 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 14:29:26.924199 | orchestrator | 2025-05-14 14:29:26.925295 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 14:29:26.925974 | orchestrator | Wednesday 14 May 2025 14:29:26 +0000 (0:00:00.601) 0:00:26.187 ********* 2025-05-14 14:29:27.161711 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:27.162241 | orchestrator | 2025-05-14 14:29:27.164656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:27.170237 | orchestrator | Wednesday 14 May 2025 14:29:27 +0000 (0:00:00.237) 0:00:26.425 ********* 2025-05-14 14:29:27.625304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-14 14:29:27.627975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-14 14:29:27.628020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-14 14:29:27.628787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-14 14:29:27.629609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-14 14:29:27.630477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-14 14:29:27.630938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-14 14:29:27.631903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-14 14:29:27.632509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-14 14:29:27.633031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-14 14:29:27.633546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-14 14:29:27.633968 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-14 14:29:27.634602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-14 14:29:27.634908 | orchestrator | 2025-05-14 14:29:27.635275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:27.636035 | orchestrator | Wednesday 14 May 2025 14:29:27 +0000 (0:00:00.462) 0:00:26.887 ********* 2025-05-14 14:29:27.823957 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:27.825196 | orchestrator | 2025-05-14 14:29:27.825384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:27.826364 | orchestrator | Wednesday 14 May 2025 14:29:27 +0000 (0:00:00.200) 0:00:27.088 ********* 2025-05-14 14:29:28.010520 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:28.011106 | orchestrator | 2025-05-14 14:29:28.012053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:28.013234 | orchestrator | Wednesday 14 May 2025 14:29:28 +0000 (0:00:00.185) 0:00:27.274 ********* 2025-05-14 14:29:28.210637 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:28.211009 | orchestrator | 2025-05-14 14:29:28.216561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:28.216601 | orchestrator | Wednesday 14 May 2025 14:29:28 +0000 (0:00:00.198) 0:00:27.472 ********* 2025-05-14 14:29:28.405768 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:28.406770 | orchestrator | 2025-05-14 14:29:28.407074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:28.408956 | orchestrator | Wednesday 14 May 2025 14:29:28 +0000 (0:00:00.197) 0:00:27.670 ********* 2025-05-14 14:29:28.601335 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:28.602156 | orchestrator | 2025-05-14 14:29:28.603018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:28.604028 | orchestrator | Wednesday 14 May 2025 14:29:28 +0000 (0:00:00.194) 0:00:27.865 ********* 2025-05-14 14:29:28.787671 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:28.788498 | orchestrator | 2025-05-14 14:29:28.789689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:28.790274 | orchestrator | Wednesday 14 May 2025 14:29:28 +0000 (0:00:00.186) 0:00:28.051 ********* 2025-05-14 14:29:28.988877 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:28.989260 | orchestrator | 2025-05-14 14:29:28.990462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:28.991336 | orchestrator | Wednesday 14 May 2025 14:29:28 +0000 (0:00:00.201) 0:00:28.253 ********* 2025-05-14 14:29:29.477651 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:29.478535 | orchestrator | 2025-05-14 14:29:29.479392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:29.480311 | orchestrator | Wednesday 14 May 2025 14:29:29 +0000 (0:00:00.487) 0:00:28.740 ********* 2025-05-14 14:29:29.904387 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8) 2025-05-14 14:29:29.904642 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8) 2025-05-14 14:29:29.905620 | orchestrator | 2025-05-14 14:29:29.906411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:29.907290 | orchestrator | Wednesday 14 May 2025 14:29:29 +0000 (0:00:00.428) 0:00:29.168 ********* 2025-05-14 14:29:30.358294 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1) 2025-05-14 14:29:30.358732 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1) 2025-05-14 14:29:30.359661 | orchestrator | 2025-05-14 14:29:30.362141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:30.362166 | orchestrator | Wednesday 14 May 2025 14:29:30 +0000 (0:00:00.452) 0:00:29.621 ********* 2025-05-14 14:29:30.789053 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728) 2025-05-14 14:29:30.789223 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728) 2025-05-14 14:29:30.790287 | orchestrator | 2025-05-14 14:29:30.790622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:30.793162 | orchestrator | Wednesday 14 May 2025 14:29:30 +0000 (0:00:00.431) 0:00:30.052 ********* 2025-05-14 14:29:31.229603 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194) 2025-05-14 14:29:31.229709 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194) 2025-05-14 14:29:31.230711 | orchestrator | 2025-05-14 14:29:31.233866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:31.233900 | orchestrator | Wednesday 14 May 2025 14:29:31 +0000 (0:00:00.439) 0:00:30.492 ********* 2025-05-14 14:29:31.576754 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 14:29:31.576997 | orchestrator | 2025-05-14 14:29:31.577020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:31.577510 | orchestrator | Wednesday 14 May 2025 14:29:31 +0000 (0:00:00.347) 0:00:30.839 ********* 2025-05-14 14:29:32.034875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-14 14:29:32.035510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-14 14:29:32.035799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-14 14:29:32.037077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-14 14:29:32.037912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-14 14:29:32.039721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-14 14:29:32.040124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-14 14:29:32.041362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-14 14:29:32.042117 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-14 14:29:32.042482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-14 14:29:32.043316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-14 14:29:32.043723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-14 14:29:32.044158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-14 14:29:32.044591 | orchestrator | 2025-05-14 14:29:32.045086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:32.045642 | orchestrator | Wednesday 14 May 2025 14:29:32 +0000 (0:00:00.459) 0:00:31.299 ********* 2025-05-14 14:29:32.241379 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:32.242587 | orchestrator | 2025-05-14 14:29:32.243414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:32.244445 | orchestrator | Wednesday 14 May 2025 14:29:32 +0000 (0:00:00.206) 0:00:31.505 ********* 2025-05-14 14:29:32.439351 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:32.440008 | orchestrator | 2025-05-14 14:29:32.440738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:32.441457 | orchestrator | Wednesday 14 May 2025 14:29:32 +0000 (0:00:00.198) 0:00:31.704 ********* 2025-05-14 14:29:32.965251 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:32.966272 | orchestrator | 2025-05-14 14:29:32.967168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:32.970109 | orchestrator | Wednesday 14 May 2025 14:29:32 +0000 (0:00:00.525) 0:00:32.229 ********* 2025-05-14 14:29:33.173517 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:33.173664 | orchestrator | 2025-05-14 14:29:33.180121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:33.183460 | orchestrator | Wednesday 14 May 2025 14:29:33 +0000 (0:00:00.207) 0:00:32.437 ********* 2025-05-14 14:29:33.403815 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:33.404564 | orchestrator | 2025-05-14 14:29:33.405082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:33.407536 | orchestrator | Wednesday 14 May 2025 14:29:33 +0000 (0:00:00.229) 0:00:32.667 ********* 2025-05-14 14:29:33.608891 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:33.608995 | orchestrator | 2025-05-14 14:29:33.609689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:33.610297 | orchestrator | Wednesday 14 May 2025 14:29:33 +0000 (0:00:00.205) 0:00:32.872 ********* 2025-05-14 14:29:33.819207 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:33.819354 | orchestrator | 2025-05-14 14:29:33.820122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:33.820821 | orchestrator | Wednesday 14 May 2025 14:29:33 +0000 (0:00:00.210) 0:00:33.083 ********* 2025-05-14 14:29:34.013000 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:34.013165 | orchestrator | 2025-05-14 14:29:34.013183 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-14 14:29:34.013464 | orchestrator | Wednesday 14 May 2025 14:29:34 +0000 (0:00:00.194) 0:00:33.277 ********* 2025-05-14 14:29:34.667360 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-14 14:29:34.667520 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-14 14:29:34.668270 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-14 14:29:34.671187 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-14 14:29:34.671415 | orchestrator | 2025-05-14 14:29:34.673090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:34.676276 | orchestrator | Wednesday 14 May 2025 14:29:34 +0000 (0:00:00.652) 0:00:33.930 ********* 2025-05-14 14:29:34.887477 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:34.887583 | orchestrator | 2025-05-14 14:29:34.887733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:34.888412 | orchestrator | Wednesday 14 May 2025 14:29:34 +0000 (0:00:00.222) 0:00:34.152 ********* 2025-05-14 14:29:35.136742 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:35.137016 | orchestrator | 2025-05-14 14:29:35.137828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:35.138133 | orchestrator | Wednesday 14 May 2025 14:29:35 +0000 (0:00:00.249) 0:00:34.401 ********* 2025-05-14 14:29:35.335946 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:35.336840 | orchestrator | 2025-05-14 14:29:35.337329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:35.338246 | orchestrator | Wednesday 14 May 2025 14:29:35 +0000 (0:00:00.197) 0:00:34.599 ********* 2025-05-14 14:29:35.987271 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:35.987635 | orchestrator | 2025-05-14 14:29:35.988665 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 14:29:35.989592 | orchestrator | Wednesday 14 May 2025 14:29:35 +0000 (0:00:00.652) 0:00:35.251 ********* 2025-05-14 14:29:36.134288 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:36.135370 | orchestrator | 2025-05-14 14:29:36.138508 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 14:29:36.138850 | orchestrator | Wednesday 14 May 2025 14:29:36 +0000 (0:00:00.145) 0:00:35.397 ********* 2025-05-14 14:29:36.343605 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}}) 2025-05-14 14:29:36.344020 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6248da54-4321-5f95-9f37-ef0f81563cc8'}}) 2025-05-14 14:29:36.349494 | orchestrator | 2025-05-14 14:29:36.349520 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 14:29:36.349533 | orchestrator | Wednesday 14 May 2025 14:29:36 +0000 (0:00:00.210) 0:00:35.607 ********* 2025-05-14 14:29:38.302143 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}) 2025-05-14 14:29:38.305517 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 
'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'}) 2025-05-14 14:29:38.306253 | orchestrator | 2025-05-14 14:29:38.306811 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 14:29:38.307684 | orchestrator | Wednesday 14 May 2025 14:29:38 +0000 (0:00:01.958) 0:00:37.566 ********* 2025-05-14 14:29:38.451574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:38.451658 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:38.454607 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:38.455137 | orchestrator | 2025-05-14 14:29:38.456018 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 14:29:38.457014 | orchestrator | Wednesday 14 May 2025 14:29:38 +0000 (0:00:00.149) 0:00:37.715 ********* 2025-05-14 14:29:39.803087 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}) 2025-05-14 14:29:39.804182 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'}) 2025-05-14 14:29:39.804662 | orchestrator | 2025-05-14 14:29:39.808238 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 14:29:39.808690 | orchestrator | Wednesday 14 May 2025 14:29:39 +0000 (0:00:01.352) 0:00:39.067 ********* 2025-05-14 14:29:39.953576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:39.953874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:39.954495 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:39.954956 | orchestrator | 2025-05-14 14:29:39.955537 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 14:29:39.961064 | orchestrator | Wednesday 14 May 2025 14:29:39 +0000 (0:00:00.151) 0:00:39.219 ********* 2025-05-14 14:29:40.084925 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:40.085516 | orchestrator | 2025-05-14 14:29:40.088753 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-14 14:29:40.089129 | orchestrator | Wednesday 14 May 2025 14:29:40 +0000 (0:00:00.130) 0:00:39.349 ********* 2025-05-14 14:29:40.243361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:40.243475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:40.243490 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:40.243502 | orchestrator | 2025-05-14 14:29:40.243513 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 14:29:40.243525 | orchestrator | Wednesday 
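
The "Create dict of block VGs -> PVs from ceph_osd_devices", "Create block VGs" and "Create block LVs" steps above are what actually carve each OSD disk (here /dev/sdb and /dev/sdc) into a ceph-<uuid> volume group holding a single osd-block-<uuid> logical volume. A minimal sketch using the community.general LVM modules follows; the _vg_pvs helper fact and the 100%VG sizing are assumptions, not the playbook's verbatim implementation.

    # Hedged sketch: one VG per OSD device, one LV spanning the whole VG.
    - name: Create dict of block VGs -> PVs from ceph_osd_devices
      ansible.builtin.set_fact:
        _vg_pvs: "{{ _vg_pvs | default({}) | combine({_vg_name: '/dev/' ~ item.key}) }}"
      vars:
        _vg_name: "ceph-{{ item.value.osd_lvm_uuid }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _vg_pvs[item.data_vg] }}"
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG
      loop: "{{ lvm_volumes }}"
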
14 May 2025 14:29:40 +0000 (0:00:00.156) 0:00:39.506 ********* 2025-05-14 14:29:40.466374 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:40.468100 | orchestrator | 2025-05-14 14:29:40.469281 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 14:29:40.470476 | orchestrator | Wednesday 14 May 2025 14:29:40 +0000 (0:00:00.224) 0:00:39.730 ********* 2025-05-14 14:29:40.607508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:40.608364 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:40.609528 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:40.610846 | orchestrator | 2025-05-14 14:29:40.611910 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 14:29:40.612736 | orchestrator | Wednesday 14 May 2025 14:29:40 +0000 (0:00:00.141) 0:00:39.872 ********* 2025-05-14 14:29:40.736671 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:40.739543 | orchestrator | 2025-05-14 14:29:40.739578 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 14:29:40.739881 | orchestrator | Wednesday 14 May 2025 14:29:40 +0000 (0:00:00.129) 0:00:40.001 ********* 2025-05-14 14:29:40.884696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:40.884768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:40.886252 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:40.887398 | orchestrator | 2025-05-14 14:29:40.888889 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 14:29:40.889382 | orchestrator | Wednesday 14 May 2025 14:29:40 +0000 (0:00:00.147) 0:00:40.148 ********* 2025-05-14 14:29:41.014475 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:41.014561 | orchestrator | 2025-05-14 14:29:41.015196 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 14:29:41.016297 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.128) 0:00:40.276 ********* 2025-05-14 14:29:41.162209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:41.164065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:41.165309 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:41.167113 | orchestrator | 2025-05-14 14:29:41.168008 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 14:29:41.168848 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.150) 0:00:40.427 ********* 2025-05-14 14:29:41.314150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 
'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:41.315055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:41.315863 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:41.316697 | orchestrator | 2025-05-14 14:29:41.317397 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 14:29:41.318174 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.151) 0:00:40.578 ********* 2025-05-14 14:29:41.467526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:41.470352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:41.470395 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:41.470407 | orchestrator | 2025-05-14 14:29:41.471476 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 14:29:41.472125 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.152) 0:00:40.731 ********* 2025-05-14 14:29:41.600910 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:41.601975 | orchestrator | 2025-05-14 14:29:41.602551 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 14:29:41.605225 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.135) 0:00:40.866 ********* 2025-05-14 14:29:41.727712 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:41.728360 | orchestrator | 2025-05-14 14:29:41.728928 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 14:29:41.729662 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.126) 0:00:40.992 ********* 2025-05-14 14:29:41.845752 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:41.846415 | orchestrator | 2025-05-14 14:29:41.847049 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 14:29:41.850273 | orchestrator | Wednesday 14 May 2025 14:29:41 +0000 (0:00:00.117) 0:00:41.110 ********* 2025-05-14 14:29:42.130190 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:29:42.131010 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 14:29:42.131868 | orchestrator | } 2025-05-14 14:29:42.134350 | orchestrator | 2025-05-14 14:29:42.134488 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 14:29:42.135712 | orchestrator | Wednesday 14 May 2025 14:29:42 +0000 (0:00:00.282) 0:00:41.393 ********* 2025-05-14 14:29:42.251881 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:29:42.252134 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 14:29:42.257526 | orchestrator | } 2025-05-14 14:29:42.257563 | orchestrator | 2025-05-14 14:29:42.257576 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 14:29:42.257588 | orchestrator | Wednesday 14 May 2025 14:29:42 +0000 (0:00:00.123) 0:00:41.516 ********* 2025-05-14 14:29:42.371337 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:29:42.372071 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 
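
The "Count OSDs put on ceph_*_devices defined in lvm_volumes" tasks and the following "Fail if number of OSDs exceeds num_osds ..." guards build per-VG OSD counts; in this run the printed dicts stay empty because the testbed defines no separate DB/WAL devices. A hedged sketch of the counting step for the DB case is shown below; the db_vg key on lvm_volumes entries and the _allowed_osds_per_db_vg lookup are assumptions.

    # Hedged sketch: accumulate how many OSDs want a DB LV on each DB VG.
    - name: Count OSDs put on ceph_db_devices defined in lvm_volumes
      ansible.builtin.set_fact:
        _num_osds_wanted_per_db_vg: "{{ _counts | combine({item.db_vg: _counts.get(item.db_vg, 0) + 1}) }}"
      vars:
        _counts: "{{ _num_osds_wanted_per_db_vg | default({}) }}"
      loop: "{{ lvm_volumes }}"
      when: item.db_vg is defined

    - name: Fail if number of OSDs exceeds num_osds for a DB VG
      ansible.builtin.fail:
        msg: "{{ item.value }} OSDs want a DB LV on {{ item.key }}, more than allowed"
      loop: "{{ _num_osds_wanted_per_db_vg | default({}) | dict2items }}"
      when: item.value | int > _allowed_osds_per_db_vg[item.key] | int
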
14:29:42.372368 | orchestrator | } 2025-05-14 14:29:42.373501 | orchestrator | 2025-05-14 14:29:42.373760 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 14:29:42.374391 | orchestrator | Wednesday 14 May 2025 14:29:42 +0000 (0:00:00.120) 0:00:41.636 ********* 2025-05-14 14:29:42.909773 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:42.910763 | orchestrator | 2025-05-14 14:29:42.911195 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 14:29:42.912372 | orchestrator | Wednesday 14 May 2025 14:29:42 +0000 (0:00:00.536) 0:00:42.173 ********* 2025-05-14 14:29:43.416306 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:43.416811 | orchestrator | 2025-05-14 14:29:43.417656 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-14 14:29:43.418391 | orchestrator | Wednesday 14 May 2025 14:29:43 +0000 (0:00:00.507) 0:00:42.681 ********* 2025-05-14 14:29:43.905790 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:43.909686 | orchestrator | 2025-05-14 14:29:43.910326 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 14:29:43.911076 | orchestrator | Wednesday 14 May 2025 14:29:43 +0000 (0:00:00.488) 0:00:43.169 ********* 2025-05-14 14:29:44.033478 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:44.034325 | orchestrator | 2025-05-14 14:29:44.035062 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 14:29:44.035573 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.129) 0:00:43.298 ********* 2025-05-14 14:29:44.152778 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:44.154130 | orchestrator | 2025-05-14 14:29:44.158202 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 14:29:44.159038 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.118) 0:00:43.417 ********* 2025-05-14 14:29:44.255750 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:44.256121 | orchestrator | 2025-05-14 14:29:44.256626 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 14:29:44.259864 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.102) 0:00:43.520 ********* 2025-05-14 14:29:44.378276 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:29:44.378984 | orchestrator |  "vgs_report": { 2025-05-14 14:29:44.381545 | orchestrator |  "vg": [] 2025-05-14 14:29:44.381879 | orchestrator |  } 2025-05-14 14:29:44.382568 | orchestrator | } 2025-05-14 14:29:44.382952 | orchestrator | 2025-05-14 14:29:44.383474 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 14:29:44.383982 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.122) 0:00:43.643 ********* 2025-05-14 14:29:44.508777 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:44.511074 | orchestrator | 2025-05-14 14:29:44.511109 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 14:29:44.511121 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.130) 0:00:43.773 ********* 2025-05-14 14:29:44.757005 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:44.757233 | orchestrator | 2025-05-14 14:29:44.758341 | orchestrator | TASK [Print size needed for LVs on 
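
The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks and the combined vgs_report fact (empty here, since no DB/WAL VGs exist) presumably rely on LVM's JSON reporting. A minimal sketch for the DB case is below; the real tasks apparently restrict the query to the configured DB/WAL VGs and combine three registers (_db/_wal/_db_wal_vgs_cmd_output), which is simplified here, and _db_vg_names is a hypothetical list of those VG names.

    # Hedged sketch: report VG size/free space in plain bytes as JSON.
    - name: Gather DB VGs with total and available size in bytes
      ansible.builtin.command: >-
        vgs --reportformat json --units b --nosuffix
        -o vg_name,vg_size,vg_free {{ _db_vg_names | join(' ') }}
      register: _db_vgs_cmd_output
      changed_when: false

    - name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
      ansible.builtin.set_fact:
        # vgs --reportformat json yields {"report": [{"vg": [...]}]}, matching the
        # vgs_report structure printed above.
        vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"
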
ceph_db_devices] **************************** 2025-05-14 14:29:44.759073 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.248) 0:00:44.021 ********* 2025-05-14 14:29:44.884604 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:44.884895 | orchestrator | 2025-05-14 14:29:44.885981 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 14:29:44.887158 | orchestrator | Wednesday 14 May 2025 14:29:44 +0000 (0:00:00.128) 0:00:44.149 ********* 2025-05-14 14:29:45.010385 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.010470 | orchestrator | 2025-05-14 14:29:45.011082 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 14:29:45.011991 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.125) 0:00:44.274 ********* 2025-05-14 14:29:45.136513 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.137504 | orchestrator | 2025-05-14 14:29:45.138169 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 14:29:45.138730 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.125) 0:00:44.400 ********* 2025-05-14 14:29:45.270698 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.272129 | orchestrator | 2025-05-14 14:29:45.273645 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 14:29:45.273668 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.135) 0:00:44.535 ********* 2025-05-14 14:29:45.397651 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.399147 | orchestrator | 2025-05-14 14:29:45.400018 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 14:29:45.400952 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.126) 0:00:44.662 ********* 2025-05-14 14:29:45.522408 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.522700 | orchestrator | 2025-05-14 14:29:45.525767 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 14:29:45.526723 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.124) 0:00:44.786 ********* 2025-05-14 14:29:45.637575 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.638528 | orchestrator | 2025-05-14 14:29:45.639034 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 14:29:45.640364 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.115) 0:00:44.902 ********* 2025-05-14 14:29:45.767243 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.767638 | orchestrator | 2025-05-14 14:29:45.768096 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 14:29:45.769032 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.130) 0:00:45.032 ********* 2025-05-14 14:29:45.898444 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:45.899316 | orchestrator | 2025-05-14 14:29:45.901171 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 14:29:45.901235 | orchestrator | Wednesday 14 May 2025 14:29:45 +0000 (0:00:00.130) 0:00:45.163 ********* 2025-05-14 14:29:46.028808 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:46.028974 | orchestrator | 2025-05-14 14:29:46.029657 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 14:29:46.029958 | orchestrator | Wednesday 14 May 2025 14:29:46 +0000 (0:00:00.123) 0:00:45.286 ********* 2025-05-14 14:29:46.155508 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:46.155602 | orchestrator | 2025-05-14 14:29:46.156122 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 14:29:46.156150 | orchestrator | Wednesday 14 May 2025 14:29:46 +0000 (0:00:00.132) 0:00:45.419 ********* 2025-05-14 14:29:46.395478 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:46.396036 | orchestrator | 2025-05-14 14:29:46.396714 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 14:29:46.399082 | orchestrator | Wednesday 14 May 2025 14:29:46 +0000 (0:00:00.240) 0:00:45.660 ********* 2025-05-14 14:29:46.570863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:46.571118 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:46.571147 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:46.571213 | orchestrator | 2025-05-14 14:29:46.571677 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 14:29:46.571928 | orchestrator | Wednesday 14 May 2025 14:29:46 +0000 (0:00:00.174) 0:00:45.835 ********* 2025-05-14 14:29:46.747449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:46.747608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:46.748146 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:46.752663 | orchestrator | 2025-05-14 14:29:46.754983 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 14:29:46.756369 | orchestrator | Wednesday 14 May 2025 14:29:46 +0000 (0:00:00.175) 0:00:46.010 ********* 2025-05-14 14:29:46.926515 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:46.926624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:46.926735 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:46.926960 | orchestrator | 2025-05-14 14:29:46.927094 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 14:29:46.928450 | orchestrator | Wednesday 14 May 2025 14:29:46 +0000 (0:00:00.179) 0:00:46.189 ********* 2025-05-14 14:29:47.086810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:47.087257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 
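
The "Fail if DB LV size < 30 GiB ..." guards above enforce a lower bound on the DB volume size before any DB LV is created. A hedged sketch of such a check is below; _db_lv_size_bytes is a hypothetical fact holding the size each DB LV would receive, not a variable taken from this log.

    # Hedged sketch: abort before creating DB LVs that would be smaller than 30 GiB.
    - name: Fail if DB LV size < 30 GiB for ceph_db_devices
      ansible.builtin.fail:
        msg: "DB LVs would get {{ _db_lv_size_bytes }} bytes, below the 30 GiB minimum"
      when:
        - _db_lv_size_bytes is defined
        - _db_lv_size_bytes | int < 30 * 1024 * 1024 * 1024
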
14:29:47.088337 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:47.088787 | orchestrator | 2025-05-14 14:29:47.089871 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 14:29:47.090708 | orchestrator | Wednesday 14 May 2025 14:29:47 +0000 (0:00:00.161) 0:00:46.351 ********* 2025-05-14 14:29:47.256102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:47.256958 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:47.257762 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:47.258482 | orchestrator | 2025-05-14 14:29:47.259251 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 14:29:47.261289 | orchestrator | Wednesday 14 May 2025 14:29:47 +0000 (0:00:00.169) 0:00:46.520 ********* 2025-05-14 14:29:47.432404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:47.433273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:47.433641 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:47.434743 | orchestrator | 2025-05-14 14:29:47.435660 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 14:29:47.436080 | orchestrator | Wednesday 14 May 2025 14:29:47 +0000 (0:00:00.175) 0:00:46.696 ********* 2025-05-14 14:29:47.605900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:47.606309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:47.606352 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:47.606374 | orchestrator | 2025-05-14 14:29:47.607535 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 14:29:47.607889 | orchestrator | Wednesday 14 May 2025 14:29:47 +0000 (0:00:00.173) 0:00:46.869 ********* 2025-05-14 14:29:47.769023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:47.769274 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:47.769297 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:47.769311 | orchestrator | 2025-05-14 14:29:47.769323 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 14:29:47.770346 | orchestrator | Wednesday 14 May 2025 14:29:47 +0000 (0:00:00.163) 0:00:47.033 ********* 2025-05-14 14:29:48.343996 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:48.344857 | orchestrator | 2025-05-14 14:29:48.345241 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-05-14 14:29:48.346061 | orchestrator | Wednesday 14 May 2025 14:29:48 +0000 (0:00:00.575) 0:00:47.608 ********* 2025-05-14 14:29:48.888241 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:48.888903 | orchestrator | 2025-05-14 14:29:48.889169 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 14:29:48.889861 | orchestrator | Wednesday 14 May 2025 14:29:48 +0000 (0:00:00.542) 0:00:48.150 ********* 2025-05-14 14:29:49.237042 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:29:49.237119 | orchestrator | 2025-05-14 14:29:49.237134 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 14:29:49.237147 | orchestrator | Wednesday 14 May 2025 14:29:49 +0000 (0:00:00.349) 0:00:48.500 ********* 2025-05-14 14:29:49.443528 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'vg_name': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'}) 2025-05-14 14:29:49.443619 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'vg_name': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}) 2025-05-14 14:29:49.445205 | orchestrator | 2025-05-14 14:29:49.446131 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 14:29:49.446488 | orchestrator | Wednesday 14 May 2025 14:29:49 +0000 (0:00:00.207) 0:00:48.707 ********* 2025-05-14 14:29:49.624001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:49.624246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:49.625002 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:49.625532 | orchestrator | 2025-05-14 14:29:49.625763 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 14:29:49.628374 | orchestrator | Wednesday 14 May 2025 14:29:49 +0000 (0:00:00.179) 0:00:48.887 ********* 2025-05-14 14:29:49.798265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:49.803187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:49.803228 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:49.803243 | orchestrator | 2025-05-14 14:29:49.803557 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 14:29:49.804196 | orchestrator | Wednesday 14 May 2025 14:29:49 +0000 (0:00:00.174) 0:00:49.061 ********* 2025-05-14 14:29:49.968342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'})  2025-05-14 14:29:49.968542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'})  2025-05-14 14:29:49.968643 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:29:49.969197 | orchestrator | 2025-05-14 
14:29:49.969442 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 14:29:49.969720 | orchestrator | Wednesday 14 May 2025 14:29:49 +0000 (0:00:00.171) 0:00:49.233 ********* 2025-05-14 14:29:50.829538 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:29:50.830193 | orchestrator |  "lvm_report": { 2025-05-14 14:29:50.831104 | orchestrator |  "lv": [ 2025-05-14 14:29:50.834106 | orchestrator |  { 2025-05-14 14:29:50.834126 | orchestrator |  "lv_name": "osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8", 2025-05-14 14:29:50.834139 | orchestrator |  "vg_name": "ceph-6248da54-4321-5f95-9f37-ef0f81563cc8" 2025-05-14 14:29:50.834149 | orchestrator |  }, 2025-05-14 14:29:50.835826 | orchestrator |  { 2025-05-14 14:29:50.836241 | orchestrator |  "lv_name": "osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6", 2025-05-14 14:29:50.836532 | orchestrator |  "vg_name": "ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6" 2025-05-14 14:29:50.838265 | orchestrator |  } 2025-05-14 14:29:50.838599 | orchestrator |  ], 2025-05-14 14:29:50.839230 | orchestrator |  "pv": [ 2025-05-14 14:29:50.840333 | orchestrator |  { 2025-05-14 14:29:50.840716 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 14:29:50.841447 | orchestrator |  "vg_name": "ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6" 2025-05-14 14:29:50.841567 | orchestrator |  }, 2025-05-14 14:29:50.843006 | orchestrator |  { 2025-05-14 14:29:50.843040 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 14:29:50.843385 | orchestrator |  "vg_name": "ceph-6248da54-4321-5f95-9f37-ef0f81563cc8" 2025-05-14 14:29:50.844284 | orchestrator |  } 2025-05-14 14:29:50.845400 | orchestrator |  ] 2025-05-14 14:29:50.845468 | orchestrator |  } 2025-05-14 14:29:50.849677 | orchestrator | } 2025-05-14 14:29:50.850105 | orchestrator | 2025-05-14 14:29:50.850579 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 14:29:50.850909 | orchestrator | 2025-05-14 14:29:50.851220 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 14:29:50.851541 | orchestrator | Wednesday 14 May 2025 14:29:50 +0000 (0:00:00.860) 0:00:50.093 ********* 2025-05-14 14:29:51.098742 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 14:29:51.099211 | orchestrator | 2025-05-14 14:29:51.100053 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 14:29:51.100483 | orchestrator | Wednesday 14 May 2025 14:29:51 +0000 (0:00:00.270) 0:00:50.363 ********* 2025-05-14 14:29:51.332500 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:29:51.332617 | orchestrator | 2025-05-14 14:29:51.332633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:51.334501 | orchestrator | Wednesday 14 May 2025 14:29:51 +0000 (0:00:00.230) 0:00:50.593 ********* 2025-05-14 14:29:51.777792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-14 14:29:51.777985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-14 14:29:51.779551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-14 14:29:51.779869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-14 14:29:51.783088 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-14 14:29:51.783113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-14 14:29:51.783125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-14 14:29:51.783392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-14 14:29:51.784284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-14 14:29:51.784796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-14 14:29:51.785222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-14 14:29:51.785863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-14 14:29:51.786215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-14 14:29:51.786590 | orchestrator | 2025-05-14 14:29:51.787065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:51.787715 | orchestrator | Wednesday 14 May 2025 14:29:51 +0000 (0:00:00.446) 0:00:51.040 ********* 2025-05-14 14:29:51.979387 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:51.980033 | orchestrator | 2025-05-14 14:29:51.980797 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:51.981765 | orchestrator | Wednesday 14 May 2025 14:29:51 +0000 (0:00:00.203) 0:00:51.243 ********* 2025-05-14 14:29:52.179583 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:52.181009 | orchestrator | 2025-05-14 14:29:52.181408 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:52.184117 | orchestrator | Wednesday 14 May 2025 14:29:52 +0000 (0:00:00.199) 0:00:51.443 ********* 2025-05-14 14:29:52.377189 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:52.377316 | orchestrator | 2025-05-14 14:29:52.378648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:52.381287 | orchestrator | Wednesday 14 May 2025 14:29:52 +0000 (0:00:00.196) 0:00:51.640 ********* 2025-05-14 14:29:52.591192 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:52.591727 | orchestrator | 2025-05-14 14:29:52.592123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:52.592984 | orchestrator | Wednesday 14 May 2025 14:29:52 +0000 (0:00:00.212) 0:00:51.852 ********* 2025-05-14 14:29:52.778683 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:52.779586 | orchestrator | 2025-05-14 14:29:52.780518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:52.781610 | orchestrator | Wednesday 14 May 2025 14:29:52 +0000 (0:00:00.189) 0:00:52.042 ********* 2025-05-14 14:29:53.268959 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:53.269410 | orchestrator | 2025-05-14 14:29:53.269568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:53.270261 | orchestrator | Wednesday 14 May 2025 14:29:53 +0000 (0:00:00.491) 0:00:52.534 ********* 2025-05-14 14:29:53.471064 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 14:29:53.471237 | orchestrator | 2025-05-14 14:29:53.472380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:53.473103 | orchestrator | Wednesday 14 May 2025 14:29:53 +0000 (0:00:00.201) 0:00:52.735 ********* 2025-05-14 14:29:53.661409 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:53.661936 | orchestrator | 2025-05-14 14:29:53.662473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:53.664405 | orchestrator | Wednesday 14 May 2025 14:29:53 +0000 (0:00:00.189) 0:00:52.924 ********* 2025-05-14 14:29:54.096704 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2) 2025-05-14 14:29:54.096996 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2) 2025-05-14 14:29:54.097728 | orchestrator | 2025-05-14 14:29:54.098518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:54.098895 | orchestrator | Wednesday 14 May 2025 14:29:54 +0000 (0:00:00.435) 0:00:53.360 ********* 2025-05-14 14:29:54.548488 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640) 2025-05-14 14:29:54.548599 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640) 2025-05-14 14:29:54.549134 | orchestrator | 2025-05-14 14:29:54.550254 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:54.550697 | orchestrator | Wednesday 14 May 2025 14:29:54 +0000 (0:00:00.451) 0:00:53.812 ********* 2025-05-14 14:29:54.976585 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb) 2025-05-14 14:29:54.976835 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb) 2025-05-14 14:29:54.977852 | orchestrator | 2025-05-14 14:29:54.978670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:54.979575 | orchestrator | Wednesday 14 May 2025 14:29:54 +0000 (0:00:00.428) 0:00:54.240 ********* 2025-05-14 14:29:55.414305 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb) 2025-05-14 14:29:55.414461 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb) 2025-05-14 14:29:55.414478 | orchestrator | 2025-05-14 14:29:55.416160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 14:29:55.417307 | orchestrator | Wednesday 14 May 2025 14:29:55 +0000 (0:00:00.435) 0:00:54.676 ********* 2025-05-14 14:29:55.754843 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 14:29:55.754977 | orchestrator | 2025-05-14 14:29:55.755493 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:55.755769 | orchestrator | Wednesday 14 May 2025 14:29:55 +0000 (0:00:00.341) 0:00:55.018 ********* 2025-05-14 14:29:56.202909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-14 14:29:56.203506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
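
The repeated "Add known partitions to the list of available block devices" tasks come from the included file /ansible/tasks/_add-device-partitions.yml, looped once per device; for sda this later yields sda1, sda14, sda15 and sda16, while the loop and raw data devices contribute nothing. A plausible shape of that include, based on Ansible's standard hardware facts, is sketched below; the block_devices fact and the device loop variable are assumptions, and the sibling _add-device-links.yml seen earlier would do the same with links.ids.

    # Hedged sketch of the _add-device-partitions.yml include body; "device" would be
    # the loop variable handed in by the outer include.
    - name: Add known partitions to the list of available block devices
      ansible.builtin.set_fact:
        block_devices: "{{ block_devices + [item] }}"
      loop: "{{ ansible_facts['devices'][device]['partitions'] | list }}"
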
2025-05-14 14:29:56.204628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-14 14:29:56.205496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-14 14:29:56.206662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-14 14:29:56.207474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-14 14:29:56.207588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-14 14:29:56.208452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-14 14:29:56.208960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-14 14:29:56.209960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-14 14:29:56.210648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-14 14:29:56.210965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-14 14:29:56.211635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-14 14:29:56.212062 | orchestrator | 2025-05-14 14:29:56.212479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:56.213317 | orchestrator | Wednesday 14 May 2025 14:29:56 +0000 (0:00:00.447) 0:00:55.466 ********* 2025-05-14 14:29:56.788284 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:56.788393 | orchestrator | 2025-05-14 14:29:56.788588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:56.789258 | orchestrator | Wednesday 14 May 2025 14:29:56 +0000 (0:00:00.582) 0:00:56.049 ********* 2025-05-14 14:29:56.991666 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:56.992014 | orchestrator | 2025-05-14 14:29:56.992712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:56.993378 | orchestrator | Wednesday 14 May 2025 14:29:56 +0000 (0:00:00.206) 0:00:56.256 ********* 2025-05-14 14:29:57.195560 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:57.195729 | orchestrator | 2025-05-14 14:29:57.196142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:57.197057 | orchestrator | Wednesday 14 May 2025 14:29:57 +0000 (0:00:00.204) 0:00:56.460 ********* 2025-05-14 14:29:57.405403 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:57.405660 | orchestrator | 2025-05-14 14:29:57.406386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:57.406560 | orchestrator | Wednesday 14 May 2025 14:29:57 +0000 (0:00:00.210) 0:00:56.670 ********* 2025-05-14 14:29:57.608036 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:57.608765 | orchestrator | 2025-05-14 14:29:57.609502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:57.610223 | orchestrator | Wednesday 14 May 2025 14:29:57 +0000 (0:00:00.202) 0:00:56.872 ********* 2025-05-14 14:29:57.808475 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 14:29:57.809725 | orchestrator | 2025-05-14 14:29:57.812432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:57.813343 | orchestrator | Wednesday 14 May 2025 14:29:57 +0000 (0:00:00.199) 0:00:57.071 ********* 2025-05-14 14:29:58.000186 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:58.001063 | orchestrator | 2025-05-14 14:29:58.001946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:58.002754 | orchestrator | Wednesday 14 May 2025 14:29:57 +0000 (0:00:00.192) 0:00:57.264 ********* 2025-05-14 14:29:58.201226 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:58.203408 | orchestrator | 2025-05-14 14:29:58.203815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:58.204199 | orchestrator | Wednesday 14 May 2025 14:29:58 +0000 (0:00:00.199) 0:00:57.464 ********* 2025-05-14 14:29:59.044363 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-14 14:29:59.045639 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-14 14:29:59.045949 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-14 14:29:59.046935 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-14 14:29:59.050246 | orchestrator | 2025-05-14 14:29:59.050290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:59.050531 | orchestrator | Wednesday 14 May 2025 14:29:59 +0000 (0:00:00.842) 0:00:58.307 ********* 2025-05-14 14:29:59.259585 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:59.260064 | orchestrator | 2025-05-14 14:29:59.260813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:59.261472 | orchestrator | Wednesday 14 May 2025 14:29:59 +0000 (0:00:00.216) 0:00:58.524 ********* 2025-05-14 14:29:59.865874 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:29:59.866084 | orchestrator | 2025-05-14 14:29:59.866909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:29:59.867666 | orchestrator | Wednesday 14 May 2025 14:29:59 +0000 (0:00:00.604) 0:00:59.129 ********* 2025-05-14 14:30:00.088784 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:00.089479 | orchestrator | 2025-05-14 14:30:00.090344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 14:30:00.090985 | orchestrator | Wednesday 14 May 2025 14:30:00 +0000 (0:00:00.222) 0:00:59.351 ********* 2025-05-14 14:30:00.303952 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:00.304704 | orchestrator | 2025-05-14 14:30:00.304787 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 14:30:00.305041 | orchestrator | Wednesday 14 May 2025 14:30:00 +0000 (0:00:00.215) 0:00:59.567 ********* 2025-05-14 14:30:00.455491 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:00.456329 | orchestrator | 2025-05-14 14:30:00.457529 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 14:30:00.460523 | orchestrator | Wednesday 14 May 2025 14:30:00 +0000 (0:00:00.152) 0:00:59.719 ********* 2025-05-14 14:30:00.667004 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'dde3cc5c-c032-592e-96b0-b740b8614a8d'}}) 2025-05-14 14:30:00.667210 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5402478b-0937-58a5-a80f-00ed6e381d0d'}}) 2025-05-14 14:30:00.667952 | orchestrator | 2025-05-14 14:30:00.668497 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 14:30:00.669187 | orchestrator | Wednesday 14 May 2025 14:30:00 +0000 (0:00:00.211) 0:00:59.931 ********* 2025-05-14 14:30:02.586905 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'}) 2025-05-14 14:30:02.587004 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'}) 2025-05-14 14:30:02.587014 | orchestrator | 2025-05-14 14:30:02.587763 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 14:30:02.590478 | orchestrator | Wednesday 14 May 2025 14:30:02 +0000 (0:00:01.917) 0:01:01.848 ********* 2025-05-14 14:30:02.746804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:02.747918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:02.749228 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:02.751265 | orchestrator | 2025-05-14 14:30:02.751630 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 14:30:02.751909 | orchestrator | Wednesday 14 May 2025 14:30:02 +0000 (0:00:00.161) 0:01:02.010 ********* 2025-05-14 14:30:04.106935 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'}) 2025-05-14 14:30:04.107237 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'}) 2025-05-14 14:30:04.108316 | orchestrator | 2025-05-14 14:30:04.109125 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 14:30:04.110070 | orchestrator | Wednesday 14 May 2025 14:30:04 +0000 (0:00:01.360) 0:01:03.370 ********* 2025-05-14 14:30:04.272761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:04.272862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:04.273896 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:04.274949 | orchestrator | 2025-05-14 14:30:04.275391 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 14:30:04.276167 | orchestrator | Wednesday 14 May 2025 14:30:04 +0000 (0:00:00.166) 0:01:03.537 ********* 2025-05-14 14:30:04.606009 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:04.607299 | orchestrator | 2025-05-14 14:30:04.609107 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-05-14 14:30:04.610575 | orchestrator | Wednesday 14 May 2025 14:30:04 +0000 (0:00:00.326) 0:01:03.864 ********* 2025-05-14 14:30:04.775635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:04.775729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:04.775797 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:04.777235 | orchestrator | 2025-05-14 14:30:04.777256 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 14:30:04.777425 | orchestrator | Wednesday 14 May 2025 14:30:04 +0000 (0:00:00.173) 0:01:04.037 ********* 2025-05-14 14:30:04.926361 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:04.926623 | orchestrator | 2025-05-14 14:30:04.926644 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 14:30:04.929072 | orchestrator | Wednesday 14 May 2025 14:30:04 +0000 (0:00:00.154) 0:01:04.191 ********* 2025-05-14 14:30:05.102496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:05.102945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:05.103826 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:05.105312 | orchestrator | 2025-05-14 14:30:05.106539 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 14:30:05.107031 | orchestrator | Wednesday 14 May 2025 14:30:05 +0000 (0:00:00.172) 0:01:04.364 ********* 2025-05-14 14:30:05.237617 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:05.238332 | orchestrator | 2025-05-14 14:30:05.239462 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 14:30:05.242136 | orchestrator | Wednesday 14 May 2025 14:30:05 +0000 (0:00:00.135) 0:01:04.500 ********* 2025-05-14 14:30:05.403904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:05.404890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:05.405783 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:05.406592 | orchestrator | 2025-05-14 14:30:05.407303 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 14:30:05.407843 | orchestrator | Wednesday 14 May 2025 14:30:05 +0000 (0:00:00.166) 0:01:04.667 ********* 2025-05-14 14:30:05.530924 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:05.531107 | orchestrator | 2025-05-14 14:30:05.531706 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 14:30:05.532360 | orchestrator | Wednesday 14 May 2025 14:30:05 +0000 (0:00:00.127) 0:01:04.795 ********* 2025-05-14 14:30:05.701469 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:05.702129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:05.702528 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:05.703373 | orchestrator | 2025-05-14 14:30:05.704683 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 14:30:05.705195 | orchestrator | Wednesday 14 May 2025 14:30:05 +0000 (0:00:00.170) 0:01:04.965 ********* 2025-05-14 14:30:05.893179 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:05.895068 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:05.898156 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:05.898380 | orchestrator | 2025-05-14 14:30:05.899039 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 14:30:05.899200 | orchestrator | Wednesday 14 May 2025 14:30:05 +0000 (0:00:00.191) 0:01:05.157 ********* 2025-05-14 14:30:06.069684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:06.070437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:06.071244 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:06.071921 | orchestrator | 2025-05-14 14:30:06.074705 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 14:30:06.074731 | orchestrator | Wednesday 14 May 2025 14:30:06 +0000 (0:00:00.176) 0:01:05.334 ********* 2025-05-14 14:30:06.206741 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:06.206852 | orchestrator | 2025-05-14 14:30:06.206874 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 14:30:06.207212 | orchestrator | Wednesday 14 May 2025 14:30:06 +0000 (0:00:00.137) 0:01:05.471 ********* 2025-05-14 14:30:06.548653 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:06.548812 | orchestrator | 2025-05-14 14:30:06.549241 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 14:30:06.550141 | orchestrator | Wednesday 14 May 2025 14:30:06 +0000 (0:00:00.341) 0:01:05.812 ********* 2025-05-14 14:30:06.694331 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:06.694522 | orchestrator | 2025-05-14 14:30:06.695641 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 14:30:06.696449 | orchestrator | Wednesday 14 May 2025 14:30:06 +0000 (0:00:00.145) 0:01:05.958 ********* 2025-05-14 14:30:06.839981 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:30:06.841118 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 14:30:06.842537 | orchestrator | } 2025-05-14 14:30:06.844706 | orchestrator | 2025-05-14 14:30:06.845028 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 14:30:06.845253 | orchestrator | Wednesday 14 May 2025 14:30:06 +0000 (0:00:00.145) 0:01:06.103 ********* 2025-05-14 14:30:06.985512 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:30:06.988205 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 14:30:06.988240 | orchestrator | } 2025-05-14 14:30:06.988253 | orchestrator | 2025-05-14 14:30:06.989045 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 14:30:06.990305 | orchestrator | Wednesday 14 May 2025 14:30:06 +0000 (0:00:00.144) 0:01:06.248 ********* 2025-05-14 14:30:07.138595 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:30:07.139348 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 14:30:07.140717 | orchestrator | } 2025-05-14 14:30:07.141768 | orchestrator | 2025-05-14 14:30:07.142872 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 14:30:07.143348 | orchestrator | Wednesday 14 May 2025 14:30:07 +0000 (0:00:00.151) 0:01:06.400 ********* 2025-05-14 14:30:07.659669 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:07.659858 | orchestrator | 2025-05-14 14:30:07.660766 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 14:30:07.660843 | orchestrator | Wednesday 14 May 2025 14:30:07 +0000 (0:00:00.523) 0:01:06.923 ********* 2025-05-14 14:30:08.172634 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:08.173204 | orchestrator | 2025-05-14 14:30:08.173938 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-14 14:30:08.176554 | orchestrator | Wednesday 14 May 2025 14:30:08 +0000 (0:00:00.511) 0:01:07.435 ********* 2025-05-14 14:30:08.685107 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:08.685401 | orchestrator | 2025-05-14 14:30:08.685793 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 14:30:08.686272 | orchestrator | Wednesday 14 May 2025 14:30:08 +0000 (0:00:00.514) 0:01:07.949 ********* 2025-05-14 14:30:08.850118 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:08.853711 | orchestrator | 2025-05-14 14:30:08.853753 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 14:30:08.854332 | orchestrator | Wednesday 14 May 2025 14:30:08 +0000 (0:00:00.162) 0:01:08.112 ********* 2025-05-14 14:30:08.958851 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:08.959926 | orchestrator | 2025-05-14 14:30:08.960777 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 14:30:08.961192 | orchestrator | Wednesday 14 May 2025 14:30:08 +0000 (0:00:00.110) 0:01:08.223 ********* 2025-05-14 14:30:09.078241 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:09.078646 | orchestrator | 2025-05-14 14:30:09.079123 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 14:30:09.079899 | orchestrator | Wednesday 14 May 2025 14:30:09 +0000 (0:00:00.119) 0:01:08.342 ********* 2025-05-14 14:30:09.418380 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:30:09.419251 | orchestrator |  "vgs_report": { 2025-05-14 14:30:09.420876 | orchestrator |  "vg": [] 2025-05-14 14:30:09.423392 | orchestrator |  } 2025-05-14 14:30:09.423441 | orchestrator 
| } 2025-05-14 14:30:09.423453 | orchestrator | 2025-05-14 14:30:09.424138 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 14:30:09.424286 | orchestrator | Wednesday 14 May 2025 14:30:09 +0000 (0:00:00.340) 0:01:08.682 ********* 2025-05-14 14:30:09.567145 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:09.567951 | orchestrator | 2025-05-14 14:30:09.569316 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 14:30:09.570467 | orchestrator | Wednesday 14 May 2025 14:30:09 +0000 (0:00:00.148) 0:01:08.831 ********* 2025-05-14 14:30:09.726524 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:09.726706 | orchestrator | 2025-05-14 14:30:09.727160 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-14 14:30:09.727640 | orchestrator | Wednesday 14 May 2025 14:30:09 +0000 (0:00:00.159) 0:01:08.990 ********* 2025-05-14 14:30:09.870275 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:09.870435 | orchestrator | 2025-05-14 14:30:09.873541 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 14:30:09.873735 | orchestrator | Wednesday 14 May 2025 14:30:09 +0000 (0:00:00.141) 0:01:09.132 ********* 2025-05-14 14:30:10.024342 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:10.024461 | orchestrator | 2025-05-14 14:30:10.024575 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 14:30:10.025508 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.156) 0:01:09.288 ********* 2025-05-14 14:30:10.166888 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:10.167136 | orchestrator | 2025-05-14 14:30:10.167516 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 14:30:10.168229 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.142) 0:01:09.430 ********* 2025-05-14 14:30:10.309134 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:10.309301 | orchestrator | 2025-05-14 14:30:10.309837 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 14:30:10.310648 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.142) 0:01:09.573 ********* 2025-05-14 14:30:10.462884 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:10.463142 | orchestrator | 2025-05-14 14:30:10.463996 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 14:30:10.464656 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.153) 0:01:09.727 ********* 2025-05-14 14:30:10.618102 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:10.618194 | orchestrator | 2025-05-14 14:30:10.620218 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 14:30:10.624908 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.153) 0:01:09.880 ********* 2025-05-14 14:30:10.748930 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:10.749368 | orchestrator | 2025-05-14 14:30:10.749954 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 14:30:10.750366 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.133) 0:01:10.013 ********* 2025-05-14 14:30:10.891100 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 14:30:10.892089 | orchestrator | 2025-05-14 14:30:10.893155 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 14:30:10.893925 | orchestrator | Wednesday 14 May 2025 14:30:10 +0000 (0:00:00.138) 0:01:10.152 ********* 2025-05-14 14:30:11.021651 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:11.022671 | orchestrator | 2025-05-14 14:30:11.023523 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 14:30:11.024080 | orchestrator | Wednesday 14 May 2025 14:30:11 +0000 (0:00:00.133) 0:01:10.285 ********* 2025-05-14 14:30:11.391922 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:11.392027 | orchestrator | 2025-05-14 14:30:11.392870 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 14:30:11.393555 | orchestrator | Wednesday 14 May 2025 14:30:11 +0000 (0:00:00.369) 0:01:10.654 ********* 2025-05-14 14:30:11.534676 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:11.534951 | orchestrator | 2025-05-14 14:30:11.535398 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 14:30:11.536082 | orchestrator | Wednesday 14 May 2025 14:30:11 +0000 (0:00:00.143) 0:01:10.798 ********* 2025-05-14 14:30:11.689002 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:11.690146 | orchestrator | 2025-05-14 14:30:11.690271 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 14:30:11.692641 | orchestrator | Wednesday 14 May 2025 14:30:11 +0000 (0:00:00.153) 0:01:10.952 ********* 2025-05-14 14:30:11.858609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:11.859255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:11.859904 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:11.861559 | orchestrator | 2025-05-14 14:30:11.861599 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 14:30:11.861610 | orchestrator | Wednesday 14 May 2025 14:30:11 +0000 (0:00:00.169) 0:01:11.121 ********* 2025-05-14 14:30:12.021743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:12.021956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:12.022671 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:12.023108 | orchestrator | 2025-05-14 14:30:12.025996 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 14:30:12.026098 | orchestrator | Wednesday 14 May 2025 14:30:12 +0000 (0:00:00.162) 0:01:11.284 ********* 2025-05-14 14:30:12.216258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:12.216541 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:12.217375 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:12.218271 | orchestrator | 2025-05-14 14:30:12.218699 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 14:30:12.219133 | orchestrator | Wednesday 14 May 2025 14:30:12 +0000 (0:00:00.195) 0:01:11.480 ********* 2025-05-14 14:30:12.385638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:12.386177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:12.387525 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:12.389192 | orchestrator | 2025-05-14 14:30:12.389243 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 14:30:12.389974 | orchestrator | Wednesday 14 May 2025 14:30:12 +0000 (0:00:00.168) 0:01:11.649 ********* 2025-05-14 14:30:12.555218 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:12.555513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:12.555877 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:12.556401 | orchestrator | 2025-05-14 14:30:12.556920 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 14:30:12.557630 | orchestrator | Wednesday 14 May 2025 14:30:12 +0000 (0:00:00.170) 0:01:11.819 ********* 2025-05-14 14:30:12.718833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:12.718935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:12.720065 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:12.722872 | orchestrator | 2025-05-14 14:30:12.723223 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 14:30:12.723649 | orchestrator | Wednesday 14 May 2025 14:30:12 +0000 (0:00:00.161) 0:01:11.981 ********* 2025-05-14 14:30:12.893456 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:12.893857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:12.894914 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:12.895599 | orchestrator | 2025-05-14 14:30:12.896308 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 14:30:12.897032 | orchestrator | Wednesday 14 May 2025 14:30:12 +0000 (0:00:00.174) 0:01:12.156 ********* 2025-05-14 14:30:13.056378 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:13.056760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:13.057635 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:13.058211 | orchestrator | 2025-05-14 14:30:13.058916 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 14:30:13.060127 | orchestrator | Wednesday 14 May 2025 14:30:13 +0000 (0:00:00.164) 0:01:12.320 ********* 2025-05-14 14:30:13.772844 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:13.773014 | orchestrator | 2025-05-14 14:30:13.773355 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-14 14:30:13.773901 | orchestrator | Wednesday 14 May 2025 14:30:13 +0000 (0:00:00.715) 0:01:13.036 ********* 2025-05-14 14:30:14.288052 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:14.289116 | orchestrator | 2025-05-14 14:30:14.290351 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 14:30:14.291929 | orchestrator | Wednesday 14 May 2025 14:30:14 +0000 (0:00:00.514) 0:01:13.551 ********* 2025-05-14 14:30:14.446526 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:14.448820 | orchestrator | 2025-05-14 14:30:14.449175 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 14:30:14.450461 | orchestrator | Wednesday 14 May 2025 14:30:14 +0000 (0:00:00.159) 0:01:13.710 ********* 2025-05-14 14:30:14.636645 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'vg_name': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'}) 2025-05-14 14:30:14.636844 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'vg_name': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'}) 2025-05-14 14:30:14.637230 | orchestrator | 2025-05-14 14:30:14.637646 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 14:30:14.638162 | orchestrator | Wednesday 14 May 2025 14:30:14 +0000 (0:00:00.191) 0:01:13.901 ********* 2025-05-14 14:30:14.809554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:14.810491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:14.811494 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:14.814322 | orchestrator | 2025-05-14 14:30:14.814348 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 14:30:14.814362 | orchestrator | Wednesday 14 May 2025 14:30:14 +0000 (0:00:00.172) 0:01:14.073 ********* 2025-05-14 14:30:14.978979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:14.979531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  
2025-05-14 14:30:14.982644 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:14.982664 | orchestrator | 2025-05-14 14:30:14.982672 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 14:30:14.982680 | orchestrator | Wednesday 14 May 2025 14:30:14 +0000 (0:00:00.168) 0:01:14.241 ********* 2025-05-14 14:30:15.169465 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'})  2025-05-14 14:30:15.169555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'})  2025-05-14 14:30:15.169656 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:15.169751 | orchestrator | 2025-05-14 14:30:15.170247 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 14:30:15.170465 | orchestrator | Wednesday 14 May 2025 14:30:15 +0000 (0:00:00.192) 0:01:14.433 ********* 2025-05-14 14:30:15.802910 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:30:15.803373 | orchestrator |  "lvm_report": { 2025-05-14 14:30:15.805242 | orchestrator |  "lv": [ 2025-05-14 14:30:15.806161 | orchestrator |  { 2025-05-14 14:30:15.806733 | orchestrator |  "lv_name": "osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d", 2025-05-14 14:30:15.807399 | orchestrator |  "vg_name": "ceph-5402478b-0937-58a5-a80f-00ed6e381d0d" 2025-05-14 14:30:15.808493 | orchestrator |  }, 2025-05-14 14:30:15.808936 | orchestrator |  { 2025-05-14 14:30:15.809942 | orchestrator |  "lv_name": "osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d", 2025-05-14 14:30:15.810829 | orchestrator |  "vg_name": "ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d" 2025-05-14 14:30:15.811179 | orchestrator |  } 2025-05-14 14:30:15.811611 | orchestrator |  ], 2025-05-14 14:30:15.813139 | orchestrator |  "pv": [ 2025-05-14 14:30:15.813661 | orchestrator |  { 2025-05-14 14:30:15.814636 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 14:30:15.816240 | orchestrator |  "vg_name": "ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d" 2025-05-14 14:30:15.816530 | orchestrator |  }, 2025-05-14 14:30:15.817310 | orchestrator |  { 2025-05-14 14:30:15.818176 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 14:30:15.818714 | orchestrator |  "vg_name": "ceph-5402478b-0937-58a5-a80f-00ed6e381d0d" 2025-05-14 14:30:15.819465 | orchestrator |  } 2025-05-14 14:30:15.820257 | orchestrator |  ] 2025-05-14 14:30:15.820829 | orchestrator |  } 2025-05-14 14:30:15.821477 | orchestrator | } 2025-05-14 14:30:15.821788 | orchestrator | 2025-05-14 14:30:15.823075 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:30:15.823123 | orchestrator | 2025-05-14 14:30:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:30:15.823139 | orchestrator | 2025-05-14 14:30:15 | INFO  | Please wait and do not abort execution. 
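
Editor's note: the lvm_report printed above is assembled from plain LVM queries. Below is a minimal sketch of how the "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" steps could be reproduced with stock Ansible modules, assuming the role simply shells out to lvs/pvs with JSON reporting; the exact commands and flags are an assumption, only the register names are taken from the task title in the log.

```yaml
# Illustrative only: query LVs and PVs with their VGs as JSON and merge the
# two reports into a single dict shaped like the lvm_report shown above.
- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report: >-
      {{ (_lvs_cmd_output.stdout | from_json).report[0]
         | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}
```

With this shape, lvm_report carries an "lv" list and a "pv" list, which is exactly the structure echoed by the "Print LVM report data" task above.
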
2025-05-14 14:30:15.823569 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 14:30:15.824563 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 14:30:15.825238 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 14:30:15.826179 | orchestrator | 2025-05-14 14:30:15.826699 | orchestrator | 2025-05-14 14:30:15.826972 | orchestrator | 2025-05-14 14:30:15.827557 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:30:15.828106 | orchestrator | Wednesday 14 May 2025 14:30:15 +0000 (0:00:00.633) 0:01:15.067 ********* 2025-05-14 14:30:15.828574 | orchestrator | =============================================================================== 2025-05-14 14:30:15.829202 | orchestrator | Create block VGs -------------------------------------------------------- 6.11s 2025-05-14 14:30:15.829624 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s 2025-05-14 14:30:15.831024 | orchestrator | Print LVM report data --------------------------------------------------- 2.16s 2025-05-14 14:30:15.831795 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.06s 2025-05-14 14:30:15.832361 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.79s 2025-05-14 14:30:15.833819 | orchestrator | Add known links to the list of available block devices ------------------ 1.61s 2025-05-14 14:30:15.834350 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2025-05-14 14:30:15.834655 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2025-05-14 14:30:15.835658 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s 2025-05-14 14:30:15.836644 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s 2025-05-14 14:30:15.837318 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.10s 2025-05-14 14:30:15.838127 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-05-14 14:30:15.838793 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2025-05-14 14:30:15.839740 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.74s 2025-05-14 14:30:15.840304 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2025-05-14 14:30:15.841098 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-05-14 14:30:15.841988 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.66s 2025-05-14 14:30:15.842996 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-05-14 14:30:15.843538 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-05-14 14:30:15.843929 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-05-14 14:30:17.795440 | orchestrator | 2025-05-14 14:30:17 | INFO  | Task c205c682-7e84-477b-af7e-07ee4c777570 (facts) was prepared for execution. 
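
Editor's note: the two tasks that dominate the recap above, "Create block VGs" (6.11s) and "Create block LVs" (4.22s), are ordinary LVM volume-group and logical-volume creation driven by the ceph_osd_devices map printed for testbed-node-5. The sketch below illustrates that pattern using the community.general LVM modules; it is an assumption about the shape of the task file, not the actual OSISM implementation, and reuses only the device/UUID values visible in the log.

```yaml
# Illustrative reconstruction of the VG/LV layout created on testbed-node-5:
# one VG per OSD disk (ceph-<uuid> on /dev/sdX) and one LV per VG (osd-block-<uuid>).
- hosts: testbed-node-5
  become: true
  vars:
    ceph_osd_devices:
      sdb: { osd_lvm_uuid: dde3cc5c-c032-592e-96b0-b740b8614a8d }
      sdc: { osd_lvm_uuid: 5402478b-0937-58a5-a80f-00ed6e381d0d }
  tasks:
    - name: Create block VGs
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%FREE
        shrink: false
      loop: "{{ ceph_osd_devices | dict2items }}"
```

The resulting ceph-<uuid>/osd-block-<uuid> names are what later appear in the lvm_report and in the lvm_volumes checks earlier in the play.
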
2025-05-14 14:30:17.795572 | orchestrator | 2025-05-14 14:30:17 | INFO  | It takes a moment until task c205c682-7e84-477b-af7e-07ee4c777570 (facts) has been started and output is visible here. 2025-05-14 14:30:20.651344 | orchestrator | 2025-05-14 14:30:20.651515 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 14:30:20.651602 | orchestrator | 2025-05-14 14:30:20.652332 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 14:30:20.653816 | orchestrator | Wednesday 14 May 2025 14:30:20 +0000 (0:00:00.148) 0:00:00.148 ********* 2025-05-14 14:30:21.483658 | orchestrator | ok: [testbed-manager] 2025-05-14 14:30:21.483773 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:30:21.483938 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:30:21.486264 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:30:21.486477 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:30:21.487662 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:30:21.489121 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:21.490113 | orchestrator | 2025-05-14 14:30:21.490776 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 14:30:21.491673 | orchestrator | Wednesday 14 May 2025 14:30:21 +0000 (0:00:00.831) 0:00:00.979 ********* 2025-05-14 14:30:21.621103 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:30:21.690716 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:30:21.760803 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:30:21.828682 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:30:21.895312 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:30:22.508199 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:30:22.508923 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:22.510171 | orchestrator | 2025-05-14 14:30:22.511590 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 14:30:22.511843 | orchestrator | 2025-05-14 14:30:22.513358 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 14:30:22.514687 | orchestrator | Wednesday 14 May 2025 14:30:22 +0000 (0:00:01.028) 0:00:02.008 ********* 2025-05-14 14:30:27.005701 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:30:27.011145 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:30:27.011845 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:30:27.012846 | orchestrator | ok: [testbed-manager] 2025-05-14 14:30:27.014665 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:30:27.016163 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:30:27.016859 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:30:27.021823 | orchestrator | 2025-05-14 14:30:27.022735 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 14:30:27.023622 | orchestrator | 2025-05-14 14:30:27.024464 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 14:30:27.025041 | orchestrator | Wednesday 14 May 2025 14:30:27 +0000 (0:00:04.494) 0:00:06.502 ********* 2025-05-14 14:30:27.313796 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:30:27.390786 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:30:27.473304 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:30:27.544355 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
14:30:27.628835 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:30:27.678250 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:30:27.678720 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:30:27.679103 | orchestrator | 2025-05-14 14:30:27.679865 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:30:27.680269 | orchestrator | 2025-05-14 14:30:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 14:30:27.681967 | orchestrator | 2025-05-14 14:30:27 | INFO  | Please wait and do not abort execution. 2025-05-14 14:30:27.682831 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.683511 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.684390 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.685234 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.685737 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.686941 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.687800 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:30:27.688233 | orchestrator | 2025-05-14 14:30:27.688730 | orchestrator | Wednesday 14 May 2025 14:30:27 +0000 (0:00:00.671) 0:00:07.173 ********* 2025-05-14 14:30:27.689768 | orchestrator | =============================================================================== 2025-05-14 14:30:27.689931 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.49s 2025-05-14 14:30:27.690579 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.03s 2025-05-14 14:30:27.691170 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.83s 2025-05-14 14:30:27.691733 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2025-05-14 14:30:28.224326 | orchestrator | 2025-05-14 14:30:28.226209 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed May 14 14:30:28 UTC 2025 2025-05-14 14:30:28.226266 | orchestrator | 2025-05-14 14:30:29.595688 | orchestrator | 2025-05-14 14:30:29 | INFO  | Collection nutshell is prepared for execution 2025-05-14 14:30:29.595761 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [0] - dotfiles 2025-05-14 14:30:29.599894 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [0] - homer 2025-05-14 14:30:29.599975 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [0] - netdata 2025-05-14 14:30:29.600017 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [0] - openstackclient 2025-05-14 14:30:29.600023 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [0] - phpmyadmin 2025-05-14 14:30:29.600028 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [0] - common 2025-05-14 14:30:29.601414 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [1] -- loadbalancer 2025-05-14 14:30:29.601432 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [2] --- opensearch 2025-05-14 14:30:29.601569 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [2] --- mariadb-ng 2025-05-14 14:30:29.601576 | orchestrator | 2025-05-14 
14:30:29 | INFO  | D [3] ---- horizon 2025-05-14 14:30:29.601760 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [3] ---- keystone 2025-05-14 14:30:29.601814 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [4] ----- neutron 2025-05-14 14:30:29.601821 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [5] ------ wait-for-nova 2025-05-14 14:30:29.601866 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [5] ------ octavia 2025-05-14 14:30:29.602327 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [4] ----- barbican 2025-05-14 14:30:29.602337 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [4] ----- designate 2025-05-14 14:30:29.602431 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [4] ----- ironic 2025-05-14 14:30:29.602577 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [4] ----- placement 2025-05-14 14:30:29.602584 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [4] ----- magnum 2025-05-14 14:30:29.602819 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [1] -- openvswitch 2025-05-14 14:30:29.602906 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [2] --- ovn 2025-05-14 14:30:29.603117 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [1] -- memcached 2025-05-14 14:30:29.603215 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [1] -- redis 2025-05-14 14:30:29.603222 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [1] -- rabbitmq-ng 2025-05-14 14:30:29.603374 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [0] - kubernetes 2025-05-14 14:30:29.603511 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [1] -- kubeconfig 2025-05-14 14:30:29.603519 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [1] -- copy-kubeconfig 2025-05-14 14:30:29.603688 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [0] - ceph 2025-05-14 14:30:29.605110 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [1] -- ceph-pools 2025-05-14 14:30:29.605120 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [2] --- copy-ceph-keys 2025-05-14 14:30:29.605125 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [3] ---- cephclient 2025-05-14 14:30:29.605192 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-14 14:30:29.605199 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [4] ----- wait-for-keystone 2025-05-14 14:30:29.605228 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-14 14:30:29.605291 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [5] ------ glance 2025-05-14 14:30:29.605455 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [5] ------ cinder 2025-05-14 14:30:29.605464 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [5] ------ nova 2025-05-14 14:30:29.605674 | orchestrator | 2025-05-14 14:30:29 | INFO  | A [4] ----- prometheus 2025-05-14 14:30:29.605681 | orchestrator | 2025-05-14 14:30:29 | INFO  | D [5] ------ grafana 2025-05-14 14:30:29.724651 | orchestrator | 2025-05-14 14:30:29 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-14 14:30:29.724716 | orchestrator | 2025-05-14 14:30:29 | INFO  | Tasks are running in the background 2025-05-14 14:30:31.612632 | orchestrator | 2025-05-14 14:30:31 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-14 14:30:33.714259 | orchestrator | 2025-05-14 14:30:33 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:33.714863 | orchestrator | 2025-05-14 14:30:33 | INFO  | Task eadb319e-9533-4de5-9d2a-34a86992b241 is in state STARTED 2025-05-14 14:30:33.715292 | orchestrator | 2025-05-14 14:30:33 | INFO  | Task 
9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:33.715750 | orchestrator | 2025-05-14 14:30:33 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:33.716137 | orchestrator | 2025-05-14 14:30:33 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:33.716568 | orchestrator | 2025-05-14 14:30:33 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:33.716611 | orchestrator | 2025-05-14 14:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:36.761342 | orchestrator | 2025-05-14 14:30:36 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:36.761476 | orchestrator | 2025-05-14 14:30:36 | INFO  | Task eadb319e-9533-4de5-9d2a-34a86992b241 is in state STARTED 2025-05-14 14:30:36.761746 | orchestrator | 2025-05-14 14:30:36 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:36.762149 | orchestrator | 2025-05-14 14:30:36 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:36.764181 | orchestrator | 2025-05-14 14:30:36 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:36.764636 | orchestrator | 2025-05-14 14:30:36 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:36.764710 | orchestrator | 2025-05-14 14:30:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:39.802347 | orchestrator | 2025-05-14 14:30:39 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:39.802520 | orchestrator | 2025-05-14 14:30:39 | INFO  | Task eadb319e-9533-4de5-9d2a-34a86992b241 is in state STARTED 2025-05-14 14:30:39.803118 | orchestrator | 2025-05-14 14:30:39 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:39.803352 | orchestrator | 2025-05-14 14:30:39 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:39.804738 | orchestrator | 2025-05-14 14:30:39 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:39.806187 | orchestrator | 2025-05-14 14:30:39 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:39.806612 | orchestrator | 2025-05-14 14:30:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:42.869718 | orchestrator | 2025-05-14 14:30:42 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:42.877259 | orchestrator | 2025-05-14 14:30:42 | INFO  | Task eadb319e-9533-4de5-9d2a-34a86992b241 is in state STARTED 2025-05-14 14:30:42.880163 | orchestrator | 2025-05-14 14:30:42 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:42.880186 | orchestrator | 2025-05-14 14:30:42 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:42.884840 | orchestrator | 2025-05-14 14:30:42 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:42.886977 | orchestrator | 2025-05-14 14:30:42 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:42.892069 | orchestrator | 2025-05-14 14:30:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:45.946180 | orchestrator | 2025-05-14 14:30:45 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:45.946667 | orchestrator | 2025-05-14 
14:30:45 | INFO  | Task eadb319e-9533-4de5-9d2a-34a86992b241 is in state STARTED 2025-05-14 14:30:45.947296 | orchestrator | 2025-05-14 14:30:45 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:45.947766 | orchestrator | 2025-05-14 14:30:45 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:45.952009 | orchestrator | 2025-05-14 14:30:45 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:45.952536 | orchestrator | 2025-05-14 14:30:45 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:45.952558 | orchestrator | 2025-05-14 14:30:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:49.006471 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:30:49.006579 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:49.007339 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task eadb319e-9533-4de5-9d2a-34a86992b241 is in state SUCCESS 2025-05-14 14:30:49.008448 | orchestrator | 2025-05-14 14:30:49.008476 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-14 14:30:49.008488 | orchestrator | 2025-05-14 14:30:49.008499 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-14 14:30:49.008511 | orchestrator | Wednesday 14 May 2025 14:30:37 +0000 (0:00:00.246) 0:00:00.246 ********* 2025-05-14 14:30:49.008523 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:30:49.008535 | orchestrator | changed: [testbed-manager] 2025-05-14 14:30:49.008545 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:30:49.008556 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:30:49.008567 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:30:49.008578 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:30:49.008588 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:30:49.008599 | orchestrator | 2025-05-14 14:30:49.008610 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-14 14:30:49.008621 | orchestrator | Wednesday 14 May 2025 14:30:40 +0000 (0:00:03.135) 0:00:03.382 ********* 2025-05-14 14:30:49.008632 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-14 14:30:49.008643 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-14 14:30:49.008653 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-14 14:30:49.008664 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-14 14:30:49.008674 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-14 14:30:49.008685 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-14 14:30:49.008695 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-14 14:30:49.008728 | orchestrator | 2025-05-14 14:30:49.008739 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-14 14:30:49.008750 | orchestrator | Wednesday 14 May 2025 14:30:41 +0000 (0:00:01.638) 0:00:05.020 ********* 2025-05-14 14:30:49.008772 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:40.847838', 'end': '2025-05-14 14:30:40.851883', 'delta': '0:00:00.004045', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 14:30:49.008788 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:40.843356', 'end': '2025-05-14 14:30:40.852848', 'delta': '0:00:00.009492', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 14:30:49.008804 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:40.863798', 'end': '2025-05-14 14:30:40.871566', 'delta': '0:00:00.007768', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 14:30:49.008837 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:41.048025', 'end': '2025-05-14 14:30:41.056408', 'delta': '0:00:00.008383', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-05-14 14:30:49.008850 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:41.300388', 'end': '2025-05-14 14:30:41.309730', 'delta': '0:00:00.009342', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 14:30:49.008869 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:41.472753', 'end': '2025-05-14 14:30:41.477569', 'delta': '0:00:00.004816', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 14:30:49.008881 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 14:30:41.583850', 'end': '2025-05-14 14:30:41.591920', 'delta': '0:00:00.008070', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 14:30:49.008892 | orchestrator | 2025-05-14 14:30:49.008908 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-05-14 14:30:49.008920 | orchestrator | Wednesday 14 May 2025 14:30:44 +0000 (0:00:02.514) 0:00:07.534 ********* 2025-05-14 14:30:49.008931 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-14 14:30:49.008942 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-14 14:30:49.008952 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-14 14:30:49.008963 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-14 14:30:49.008986 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-14 14:30:49.008997 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-14 14:30:49.009008 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-14 14:30:49.009054 | orchestrator | 2025-05-14 14:30:49.009067 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:30:49.009080 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009094 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009106 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009126 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009139 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009151 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009169 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:30:49.009182 | orchestrator | 2025-05-14 14:30:49.009194 | orchestrator | Wednesday 14 May 2025 14:30:46 +0000 (0:00:02.376) 0:00:09.911 ********* 2025-05-14 14:30:49.009207 | orchestrator | =============================================================================== 2025-05-14 14:30:49.009220 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.13s 2025-05-14 14:30:49.009232 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.51s 2025-05-14 14:30:49.009245 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.38s 2025-05-14 14:30:49.009257 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.64s 2025-05-14 14:30:49.009337 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:49.010795 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:49.012572 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:49.013460 | orchestrator | 2025-05-14 14:30:49 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:49.013610 | orchestrator | 2025-05-14 14:30:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:52.093253 | orchestrator | 2025-05-14 14:30:52 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:30:52.093344 | orchestrator | 2025-05-14 14:30:52 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:52.093377 | orchestrator | 2025-05-14 14:30:52 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:52.093435 | orchestrator | 2025-05-14 14:30:52 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:52.096181 | orchestrator | 2025-05-14 14:30:52 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:52.097209 | orchestrator | 2025-05-14 14:30:52 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:52.097253 | orchestrator | 2025-05-14 14:30:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:55.159006 | orchestrator | 2025-05-14 14:30:55 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:30:55.159454 | orchestrator | 2025-05-14 14:30:55 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:55.160401 | orchestrator | 2025-05-14 14:30:55 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:55.160439 | orchestrator | 2025-05-14 14:30:55 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:55.161789 | orchestrator | 2025-05-14 14:30:55 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:55.162743 | orchestrator | 2025-05-14 14:30:55 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:55.162777 | orchestrator | 2025-05-14 14:30:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:30:58.209497 | orchestrator | 2025-05-14 14:30:58 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:30:58.210199 | orchestrator | 2025-05-14 14:30:58 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:30:58.212831 | orchestrator | 2025-05-14 14:30:58 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:30:58.215851 | orchestrator | 2025-05-14 14:30:58 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:30:58.215904 | orchestrator | 2025-05-14 14:30:58 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:30:58.216605 | orchestrator | 2025-05-14 14:30:58 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:30:58.216618 | orchestrator | 2025-05-14 14:30:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:01.271463 | orchestrator | 2025-05-14 14:31:01 | INFO  | Task 
ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:01.272465 | orchestrator | 2025-05-14 14:31:01 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:31:01.272506 | orchestrator | 2025-05-14 14:31:01 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:01.273372 | orchestrator | 2025-05-14 14:31:01 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:01.275162 | orchestrator | 2025-05-14 14:31:01 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:01.275205 | orchestrator | 2025-05-14 14:31:01 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:01.275974 | orchestrator | 2025-05-14 14:31:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:04.336588 | orchestrator | 2025-05-14 14:31:04 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:04.337574 | orchestrator | 2025-05-14 14:31:04 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:31:04.339854 | orchestrator | 2025-05-14 14:31:04 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:04.341667 | orchestrator | 2025-05-14 14:31:04 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:04.344076 | orchestrator | 2025-05-14 14:31:04 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:04.344106 | orchestrator | 2025-05-14 14:31:04 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:04.344118 | orchestrator | 2025-05-14 14:31:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:07.398683 | orchestrator | 2025-05-14 14:31:07 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:07.399230 | orchestrator | 2025-05-14 14:31:07 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:31:07.401727 | orchestrator | 2025-05-14 14:31:07 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:07.403129 | orchestrator | 2025-05-14 14:31:07 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:07.405916 | orchestrator | 2025-05-14 14:31:07 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:07.407661 | orchestrator | 2025-05-14 14:31:07 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:07.408467 | orchestrator | 2025-05-14 14:31:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:10.477444 | orchestrator | 2025-05-14 14:31:10 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:10.480624 | orchestrator | 2025-05-14 14:31:10 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state STARTED 2025-05-14 14:31:10.486863 | orchestrator | 2025-05-14 14:31:10 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:10.486893 | orchestrator | 2025-05-14 14:31:10 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:10.488793 | orchestrator | 2025-05-14 14:31:10 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:10.489065 | orchestrator | 2025-05-14 14:31:10 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:10.489088 | 
orchestrator | 2025-05-14 14:31:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:13.538912 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:13.539257 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task f513dc4e-3168-4479-9ec0-1ae7077cd11b is in state SUCCESS 2025-05-14 14:31:13.541201 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:13.544339 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:13.544914 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:13.546654 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:13.547983 | orchestrator | 2025-05-14 14:31:13 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:13.548544 | orchestrator | 2025-05-14 14:31:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:16.595704 | orchestrator | 2025-05-14 14:31:16 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:16.596717 | orchestrator | 2025-05-14 14:31:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:16.599588 | orchestrator | 2025-05-14 14:31:16 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:16.600645 | orchestrator | 2025-05-14 14:31:16 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:16.603518 | orchestrator | 2025-05-14 14:31:16 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:16.608285 | orchestrator | 2025-05-14 14:31:16 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:16.608315 | orchestrator | 2025-05-14 14:31:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:19.658559 | orchestrator | 2025-05-14 14:31:19 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:19.659356 | orchestrator | 2025-05-14 14:31:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:19.659426 | orchestrator | 2025-05-14 14:31:19 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:19.660668 | orchestrator | 2025-05-14 14:31:19 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:19.664090 | orchestrator | 2025-05-14 14:31:19 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:19.664117 | orchestrator | 2025-05-14 14:31:19 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:19.664129 | orchestrator | 2025-05-14 14:31:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:22.754852 | orchestrator | 2025-05-14 14:31:22 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:22.759655 | orchestrator | 2025-05-14 14:31:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:22.761044 | orchestrator | 2025-05-14 14:31:22 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:22.761962 | orchestrator | 2025-05-14 14:31:22 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 
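The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above and below come from the deployment wrapper polling the OSISM manager for the state of the tasks it has queued, one per play; for example, task f513dc4e-3168-4479-9ec0-1ae7077cd11b flips to SUCCESS once its play finishes. A minimal Python sketch of that polling pattern, assuming a placeholder get_task_state() helper in place of whatever API the wrapper actually queries:

    import time

    def get_task_state(task_id):
        # Placeholder only: the real wrapper queries the OSISM manager / task
        # queue for the current state ("STARTED", "SUCCESS", ...) of the UUID.
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1.0):
        # Poll every queued task until all of them reach a final state,
        # printing the same kind of status lines that appear in this log.
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    wait_for_tasks(["ffb6b045-47fb-44d9-8054-81b5fb682def",
                    "f513dc4e-3168-4479-9ec0-1ae7077cd11b"])

The wrapper announces a one-second wait between checks; the roughly three seconds between rounds in the log presumably also include the time spent querying each task.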
2025-05-14 14:31:22.763580 | orchestrator | 2025-05-14 14:31:22 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:22.764203 | orchestrator | 2025-05-14 14:31:22 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:22.764229 | orchestrator | 2025-05-14 14:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:25.833549 | orchestrator | 2025-05-14 14:31:25 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:25.839489 | orchestrator | 2025-05-14 14:31:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:25.840140 | orchestrator | 2025-05-14 14:31:25 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:25.841789 | orchestrator | 2025-05-14 14:31:25 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:25.845554 | orchestrator | 2025-05-14 14:31:25 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state STARTED 2025-05-14 14:31:25.847135 | orchestrator | 2025-05-14 14:31:25 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:25.847162 | orchestrator | 2025-05-14 14:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:28.886283 | orchestrator | 2025-05-14 14:31:28 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:28.889701 | orchestrator | 2025-05-14 14:31:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:28.891226 | orchestrator | 2025-05-14 14:31:28 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:28.892814 | orchestrator | 2025-05-14 14:31:28 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:28.893813 | orchestrator | 2025-05-14 14:31:28 | INFO  | Task 169cce63-28e4-4df1-86cd-2650bbb5da3d is in state SUCCESS 2025-05-14 14:31:28.894320 | orchestrator | 2025-05-14 14:31:28 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:28.894965 | orchestrator | 2025-05-14 14:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:31.929567 | orchestrator | 2025-05-14 14:31:31 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:31.929665 | orchestrator | 2025-05-14 14:31:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:31.929681 | orchestrator | 2025-05-14 14:31:31 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:31.929693 | orchestrator | 2025-05-14 14:31:31 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:31.929704 | orchestrator | 2025-05-14 14:31:31 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:31.929715 | orchestrator | 2025-05-14 14:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:34.964537 | orchestrator | 2025-05-14 14:31:34 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:34.966177 | orchestrator | 2025-05-14 14:31:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:34.968555 | orchestrator | 2025-05-14 14:31:34 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:34.970544 | orchestrator | 2025-05-14 14:31:34 | INFO  | Task
6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:34.971694 | orchestrator | 2025-05-14 14:31:34 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:34.971998 | orchestrator | 2025-05-14 14:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:38.011468 | orchestrator | 2025-05-14 14:31:38 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:38.011688 | orchestrator | 2025-05-14 14:31:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:38.012378 | orchestrator | 2025-05-14 14:31:38 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state STARTED 2025-05-14 14:31:38.013054 | orchestrator | 2025-05-14 14:31:38 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:38.013734 | orchestrator | 2025-05-14 14:31:38 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:38.013964 | orchestrator | 2025-05-14 14:31:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:41.052882 | orchestrator | 2025-05-14 14:31:41 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:41.053389 | orchestrator | 2025-05-14 14:31:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:41.054102 | orchestrator | 2025-05-14 14:31:41 | INFO  | Task 9584bd27-1873-4142-a495-e06bd0b5eefa is in state SUCCESS 2025-05-14 14:31:41.055508 | orchestrator | 2025-05-14 14:31:41.055555 | orchestrator | 2025-05-14 14:31:41.055575 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-14 14:31:41.055589 | orchestrator | 2025-05-14 14:31:41.055600 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-14 14:31:41.055611 | orchestrator | Wednesday 14 May 2025 14:30:36 +0000 (0:00:00.361) 0:00:00.361 ********* 2025-05-14 14:31:41.055622 | orchestrator | ok: [testbed-manager] => { 2025-05-14 14:31:41.055677 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-14 14:31:41.055691 | orchestrator | } 2025-05-14 14:31:41.055702 | orchestrator | 2025-05-14 14:31:41.055713 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-14 14:31:41.055724 | orchestrator | Wednesday 14 May 2025 14:30:37 +0000 (0:00:00.236) 0:00:00.598 ********* 2025-05-14 14:31:41.055735 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.055746 | orchestrator | 2025-05-14 14:31:41.055757 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-14 14:31:41.055767 | orchestrator | Wednesday 14 May 2025 14:30:38 +0000 (0:00:01.365) 0:00:01.964 ********* 2025-05-14 14:31:41.055778 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-14 14:31:41.055795 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-14 14:31:41.055806 | orchestrator | 2025-05-14 14:31:41.055817 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-14 14:31:41.055828 | orchestrator | Wednesday 14 May 2025 14:30:39 +0000 (0:00:00.985) 0:00:02.949 ********* 2025-05-14 14:31:41.055838 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.055849 | orchestrator | 2025-05-14 14:31:41.055860 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-14 14:31:41.055871 | orchestrator | Wednesday 14 May 2025 14:30:42 +0000 (0:00:02.996) 0:00:05.945 ********* 2025-05-14 14:31:41.055882 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.055893 | orchestrator | 2025-05-14 14:31:41.055903 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-14 14:31:41.055914 | orchestrator | Wednesday 14 May 2025 14:30:43 +0000 (0:00:01.444) 0:00:07.390 ********* 2025-05-14 14:31:41.055943 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
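The "Manage homer service" task above is retried (up to 10 times, per the FAILED - RETRYING message) until the compose-managed container is actually up; it reports ok shortly afterwards and the timing summary attributes about 24 seconds to it. The role drives this through Ansible's own retry mechanism around its service task; purely as an illustration of the same retry-until-up pattern, here is a hedged Python sketch built around the /opt/homer/docker-compose.yml file the previous task copied (the exact command and the retry delay are assumptions, not the role's code):

    import subprocess
    import time

    COMPOSE_FILE = "/opt/homer/docker-compose.yml"  # written by the task above

    def manage_service(retries=10, delay=10):
        # Keep trying to bring the service up until docker compose succeeds,
        # mirroring the FAILED - RETRYING behaviour seen in the task output.
        # The 10-second delay is an assumption; the role's real delay is not
        # visible in the log.
        for attempt in range(1, retries + 1):
            result = subprocess.run(
                ["docker", "compose", "-f", COMPOSE_FILE, "up", "-d"])
            if result.returncode == 0:
                print("homer service is up")
                return
            print(f"Attempt {attempt}/{retries} failed, retrying in {delay}s")
            time.sleep(delay)
        raise RuntimeError("homer service did not come up")

    if __name__ == "__main__":
        manage_service()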
2025-05-14 14:31:41.055955 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.055965 | orchestrator | 2025-05-14 14:31:41.055976 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-14 14:31:41.055987 | orchestrator | Wednesday 14 May 2025 14:31:08 +0000 (0:00:24.436) 0:00:31.827 ********* 2025-05-14 14:31:41.055998 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.056009 | orchestrator | 2025-05-14 14:31:41.056020 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:31:41.056030 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.056042 | orchestrator | 2025-05-14 14:31:41.056053 | orchestrator | Wednesday 14 May 2025 14:31:10 +0000 (0:00:02.578) 0:00:34.406 ********* 2025-05-14 14:31:41.056064 | orchestrator | =============================================================================== 2025-05-14 14:31:41.056074 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.44s 2025-05-14 14:31:41.056087 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.00s 2025-05-14 14:31:41.056099 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.58s 2025-05-14 14:31:41.056111 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.44s 2025-05-14 14:31:41.056124 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.37s 2025-05-14 14:31:41.056137 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.99s 2025-05-14 14:31:41.056149 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.24s 2025-05-14 14:31:41.056162 | orchestrator | 2025-05-14 14:31:41.056174 | orchestrator | 2025-05-14 14:31:41.056186 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-14 14:31:41.056198 | orchestrator | 2025-05-14 14:31:41.056210 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-14 14:31:41.056222 | orchestrator | Wednesday 14 May 2025 14:30:37 +0000 (0:00:00.376) 0:00:00.376 ********* 2025-05-14 14:31:41.056235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-14 14:31:41.056248 | orchestrator | 2025-05-14 14:31:41.056260 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-14 14:31:41.056272 | orchestrator | Wednesday 14 May 2025 14:30:37 +0000 (0:00:00.372) 0:00:00.749 ********* 2025-05-14 14:31:41.056284 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-14 14:31:41.056296 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-14 14:31:41.056309 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-14 14:31:41.056321 | orchestrator | 2025-05-14 14:31:41.056333 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-14 14:31:41.056345 | orchestrator | Wednesday 14 May 2025 14:30:39 +0000 (0:00:01.311) 0:00:02.060 ********* 2025-05-14 14:31:41.056383 | orchestrator | changed: [testbed-manager] 
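A few tasks further down, the openstackclient role copies an "openstack wrapper script" onto the manager. The wrapper installed by the role is a shell script; the sketch below only illustrates the underlying idea, namely forwarding the command line into the openstackclient container so the manager host needs no locally installed OpenStack CLI. The container name and invocation are assumptions for illustration:

    import subprocess
    import sys

    def openstack(*args):
        # Forward the command line into the openstackclient container so the
        # manager host itself does not need the OpenStack CLI installed.
        # "openstackclient" as container name is an assumption.
        cmd = ["docker", "exec", "-i", "openstackclient", "openstack", *args]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        sys.exit(openstack(*sys.argv[1:]))

Invoked as, say, python3 openstack.py server list, this would run openstack server list inside the container and return its exit code to the caller.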
2025-05-14 14:31:41.056395 | orchestrator | 2025-05-14 14:31:41.056408 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-14 14:31:41.056420 | orchestrator | Wednesday 14 May 2025 14:30:40 +0000 (0:00:01.645) 0:00:03.706 ********* 2025-05-14 14:31:41.056433 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-14 14:31:41.056444 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.056455 | orchestrator | 2025-05-14 14:31:41.056478 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-14 14:31:41.056490 | orchestrator | Wednesday 14 May 2025 14:31:17 +0000 (0:00:37.205) 0:00:40.912 ********* 2025-05-14 14:31:41.056511 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.056532 | orchestrator | 2025-05-14 14:31:41.056551 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-14 14:31:41.056572 | orchestrator | Wednesday 14 May 2025 14:31:19 +0000 (0:00:01.706) 0:00:42.618 ********* 2025-05-14 14:31:41.056590 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.056607 | orchestrator | 2025-05-14 14:31:41.056618 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-14 14:31:41.056629 | orchestrator | Wednesday 14 May 2025 14:31:21 +0000 (0:00:01.530) 0:00:44.149 ********* 2025-05-14 14:31:41.056640 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.056651 | orchestrator | 2025-05-14 14:31:41.056662 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-14 14:31:41.056673 | orchestrator | Wednesday 14 May 2025 14:31:23 +0000 (0:00:02.727) 0:00:46.877 ********* 2025-05-14 14:31:41.056684 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.056695 | orchestrator | 2025-05-14 14:31:41.056706 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-14 14:31:41.056722 | orchestrator | Wednesday 14 May 2025 14:31:24 +0000 (0:00:00.903) 0:00:47.780 ********* 2025-05-14 14:31:41.056733 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.056744 | orchestrator | 2025-05-14 14:31:41.056755 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-14 14:31:41.056766 | orchestrator | Wednesday 14 May 2025 14:31:25 +0000 (0:00:00.712) 0:00:48.492 ********* 2025-05-14 14:31:41.056777 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.056789 | orchestrator | 2025-05-14 14:31:41.056800 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:31:41.056811 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.056822 | orchestrator | 2025-05-14 14:31:41.056833 | orchestrator | Wednesday 14 May 2025 14:31:25 +0000 (0:00:00.336) 0:00:48.829 ********* 2025-05-14 14:31:41.056844 | orchestrator | =============================================================================== 2025-05-14 14:31:41.056855 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.21s 2025-05-14 14:31:41.056866 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.73s 2025-05-14 14:31:41.056876 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 1.71s 2025-05-14 14:31:41.056887 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.65s 2025-05-14 14:31:41.056898 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.53s 2025-05-14 14:31:41.056909 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.31s 2025-05-14 14:31:41.056919 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.90s 2025-05-14 14:31:41.056930 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.71s 2025-05-14 14:31:41.056941 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s 2025-05-14 14:31:41.056953 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.34s 2025-05-14 14:31:41.056963 | orchestrator | 2025-05-14 14:31:41.056974 | orchestrator | 2025-05-14 14:31:41.056985 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:31:41.056996 | orchestrator | 2025-05-14 14:31:41.057007 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:31:41.057018 | orchestrator | Wednesday 14 May 2025 14:30:36 +0000 (0:00:00.145) 0:00:00.145 ********* 2025-05-14 14:31:41.057029 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-14 14:31:41.057040 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-14 14:31:41.057051 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-14 14:31:41.057061 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-14 14:31:41.057078 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-14 14:31:41.057089 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-14 14:31:41.057100 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-14 14:31:41.057111 | orchestrator | 2025-05-14 14:31:41.057122 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-14 14:31:41.057133 | orchestrator | 2025-05-14 14:31:41.057144 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-14 14:31:41.057155 | orchestrator | Wednesday 14 May 2025 14:30:37 +0000 (0:00:01.114) 0:00:01.260 ********* 2025-05-14 14:31:41.057179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:31:41.057192 | orchestrator | 2025-05-14 14:31:41.057203 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-14 14:31:41.057214 | orchestrator | Wednesday 14 May 2025 14:30:39 +0000 (0:00:01.336) 0:00:02.596 ********* 2025-05-14 14:31:41.057225 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.057236 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:31:41.057247 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:31:41.057258 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:31:41.057269 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:31:41.057280 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:31:41.057291 | 
orchestrator | ok: [testbed-node-5] 2025-05-14 14:31:41.057302 | orchestrator | 2025-05-14 14:31:41.057313 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-14 14:31:41.057331 | orchestrator | Wednesday 14 May 2025 14:30:41 +0000 (0:00:02.000) 0:00:04.597 ********* 2025-05-14 14:31:41.057343 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:31:41.057377 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.057390 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:31:41.057401 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:31:41.057412 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:31:41.057423 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:31:41.057433 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:31:41.057444 | orchestrator | 2025-05-14 14:31:41.057455 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-14 14:31:41.057465 | orchestrator | Wednesday 14 May 2025 14:30:44 +0000 (0:00:03.436) 0:00:08.034 ********* 2025-05-14 14:31:41.057476 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.057487 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:31:41.057498 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:31:41.057508 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:31:41.057519 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:31:41.057529 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:31:41.057543 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:31:41.057562 | orchestrator | 2025-05-14 14:31:41.057582 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-14 14:31:41.057602 | orchestrator | Wednesday 14 May 2025 14:30:47 +0000 (0:00:02.467) 0:00:10.501 ********* 2025-05-14 14:31:41.057629 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.057647 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:31:41.057659 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:31:41.057670 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:31:41.057681 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:31:41.057691 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:31:41.057702 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:31:41.057712 | orchestrator | 2025-05-14 14:31:41.057723 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-14 14:31:41.057734 | orchestrator | Wednesday 14 May 2025 14:30:57 +0000 (0:00:09.988) 0:00:20.490 ********* 2025-05-14 14:31:41.057745 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:31:41.057767 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:31:41.057778 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:31:41.057788 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:31:41.057799 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:31:41.057809 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:31:41.057820 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.057831 | orchestrator | 2025-05-14 14:31:41.057842 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-14 14:31:41.057853 | orchestrator | Wednesday 14 May 2025 14:31:16 +0000 (0:00:18.778) 0:00:39.269 ********* 2025-05-14 14:31:41.057864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:31:41.057876 | orchestrator | 2025-05-14 14:31:41.057887 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-14 14:31:41.057897 | orchestrator | Wednesday 14 May 2025 14:31:18 +0000 (0:00:02.443) 0:00:41.712 ********* 2025-05-14 14:31:41.057908 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-14 14:31:41.057918 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-14 14:31:41.057929 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-14 14:31:41.057940 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-14 14:31:41.057950 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-14 14:31:41.057961 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-14 14:31:41.057971 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-14 14:31:41.057982 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-14 14:31:41.057993 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-14 14:31:41.058004 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-14 14:31:41.058083 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-14 14:31:41.058098 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-14 14:31:41.058109 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-14 14:31:41.058121 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-14 14:31:41.058132 | orchestrator | 2025-05-14 14:31:41.058143 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-14 14:31:41.058154 | orchestrator | Wednesday 14 May 2025 14:31:24 +0000 (0:00:06.506) 0:00:48.219 ********* 2025-05-14 14:31:41.058165 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.058176 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:31:41.058187 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:31:41.058198 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:31:41.058209 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:31:41.058219 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:31:41.058230 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:31:41.058241 | orchestrator | 2025-05-14 14:31:41.058252 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-14 14:31:41.058263 | orchestrator | Wednesday 14 May 2025 14:31:26 +0000 (0:00:01.753) 0:00:49.972 ********* 2025-05-14 14:31:41.058274 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.058285 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:31:41.058296 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:31:41.058306 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:31:41.058317 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:31:41.058328 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:31:41.058339 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:31:41.058350 | orchestrator | 2025-05-14 14:31:41.058428 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-14 14:31:41.058441 | orchestrator | Wednesday 14 May 2025 14:31:28 +0000 (0:00:02.186) 0:00:52.159 ********* 2025-05-14 14:31:41.058451 | 
orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.058472 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:31:41.058482 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:31:41.058493 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:31:41.058513 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:31:41.058524 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:31:41.058535 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:31:41.058545 | orchestrator | 2025-05-14 14:31:41.058556 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-14 14:31:41.058567 | orchestrator | Wednesday 14 May 2025 14:31:30 +0000 (0:00:01.454) 0:00:53.613 ********* 2025-05-14 14:31:41.058578 | orchestrator | ok: [testbed-manager] 2025-05-14 14:31:41.058594 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:31:41.058614 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:31:41.058636 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:31:41.058656 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:31:41.058667 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:31:41.058678 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:31:41.058689 | orchestrator | 2025-05-14 14:31:41.058699 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-14 14:31:41.058710 | orchestrator | Wednesday 14 May 2025 14:31:32 +0000 (0:00:02.377) 0:00:55.990 ********* 2025-05-14 14:31:41.058721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-14 14:31:41.058739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:31:41.058750 | orchestrator | 2025-05-14 14:31:41.058761 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-14 14:31:41.058772 | orchestrator | Wednesday 14 May 2025 14:31:34 +0000 (0:00:01.631) 0:00:57.622 ********* 2025-05-14 14:31:41.058783 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.058794 | orchestrator | 2025-05-14 14:31:41.058805 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-14 14:31:41.058816 | orchestrator | Wednesday 14 May 2025 14:31:35 +0000 (0:00:01.556) 0:00:59.179 ********* 2025-05-14 14:31:41.058827 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:31:41.058838 | orchestrator | changed: [testbed-manager] 2025-05-14 14:31:41.058848 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:31:41.058859 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:31:41.058869 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:31:41.058880 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:31:41.058890 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:31:41.058901 | orchestrator | 2025-05-14 14:31:41.058912 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:31:41.058922 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058934 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058944 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058953 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058963 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058972 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058982 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:31:41.058998 | orchestrator | 2025-05-14 14:31:41.059008 | orchestrator | Wednesday 14 May 2025 14:31:38 +0000 (0:00:02.664) 0:01:01.843 ********* 2025-05-14 14:31:41.059017 | orchestrator | =============================================================================== 2025-05-14 14:31:41.059027 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.78s 2025-05-14 14:31:41.059036 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.99s 2025-05-14 14:31:41.059045 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.51s 2025-05-14 14:31:41.059055 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.44s 2025-05-14 14:31:41.059064 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.66s 2025-05-14 14:31:41.059074 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.47s 2025-05-14 14:31:41.059083 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.44s 2025-05-14 14:31:41.059093 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.38s 2025-05-14 14:31:41.059102 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.19s 2025-05-14 14:31:41.059112 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.00s 2025-05-14 14:31:41.059121 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.75s 2025-05-14 14:31:41.059131 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.63s 2025-05-14 14:31:41.059140 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.56s 2025-05-14 14:31:41.059149 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.45s 2025-05-14 14:31:41.059165 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.34s 2025-05-14 14:31:41.059175 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.11s 2025-05-14 14:31:41.059185 | orchestrator | 2025-05-14 14:31:41 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:41.062713 | orchestrator | 2025-05-14 14:31:41 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:41.062935 | orchestrator | 2025-05-14 14:31:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:44.116754 | orchestrator | 2025-05-14 14:31:44 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:44.116853 | orchestrator | 2025-05-14 14:31:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:44.116868 | orchestrator | 
2025-05-14 14:31:44 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:44.116879 | orchestrator | 2025-05-14 14:31:44 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:44.116891 | orchestrator | 2025-05-14 14:31:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:47.151764 | orchestrator | 2025-05-14 14:31:47 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:47.151891 | orchestrator | 2025-05-14 14:31:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:47.155893 | orchestrator | 2025-05-14 14:31:47 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:47.155959 | orchestrator | 2025-05-14 14:31:47 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:47.155979 | orchestrator | 2025-05-14 14:31:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:50.186882 | orchestrator | 2025-05-14 14:31:50 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state STARTED 2025-05-14 14:31:50.188179 | orchestrator | 2025-05-14 14:31:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:50.191124 | orchestrator | 2025-05-14 14:31:50 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:50.191163 | orchestrator | 2025-05-14 14:31:50 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:50.191185 | orchestrator | 2025-05-14 14:31:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:53.235702 | orchestrator | 2025-05-14 14:31:53 | INFO  | Task ffb6b045-47fb-44d9-8054-81b5fb682def is in state SUCCESS 2025-05-14 14:31:53.235861 | orchestrator | 2025-05-14 14:31:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:53.236280 | orchestrator | 2025-05-14 14:31:53 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:53.239045 | orchestrator | 2025-05-14 14:31:53 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:53.239080 | orchestrator | 2025-05-14 14:31:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:56.275244 | orchestrator | 2025-05-14 14:31:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:56.275518 | orchestrator | 2025-05-14 14:31:56 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:56.277447 | orchestrator | 2025-05-14 14:31:56 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:56.277483 | orchestrator | 2025-05-14 14:31:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:31:59.321968 | orchestrator | 2025-05-14 14:31:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:31:59.322622 | orchestrator | 2025-05-14 14:31:59 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED 2025-05-14 14:31:59.323565 | orchestrator | 2025-05-14 14:31:59 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:31:59.323641 | orchestrator | 2025-05-14 14:31:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:32:02.373721 | orchestrator | 2025-05-14 14:32:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:32:02.374197 | orchestrator | 2025-05-14 14:32:02 | INFO 
| Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state STARTED
[... tasks d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f, 6154cb2a-95e9-46ec-8051-3820fcca82c8 and 121917d2-6844-4b08-81d5-da99b976bbe1 repeatedly reported in state STARTED roughly every 3 seconds from 14:32:02 to 14:32:51, each round followed by "Wait 1 second(s) until the next check" ...]
2025-05-14 14:32:54.293401 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:32:54.294138 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:32:54.295054 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state STARTED
2025-05-14 14:32:54.296768 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task 6154cb2a-95e9-46ec-8051-3820fcca82c8 is in state SUCCESS
2025-05-14 14:32:54.299829 | orchestrator |
2025-05-14 14:32:54.299879 | orchestrator |
2025-05-14 14:32:54.299891 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-14 14:32:54.299903 | orchestrator |
2025-05-14 14:32:54.299914 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-14 14:32:54.299926 | orchestrator | Wednesday 14 May 2025 14:30:52 +0000 (0:00:00.245) 0:00:00.245 *********
2025-05-14 14:32:54.299937 | orchestrator | ok: [testbed-manager]
2025-05-14 14:32:54.299948 | orchestrator |
2025-05-14 14:32:54.299959 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-14 14:32:54.299972 | orchestrator | Wednesday 14 May 2025 14:30:53 +0000 (0:00:00.817) 0:00:01.063 *********
2025-05-14 14:32:54.299992 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-14 14:32:54.300004 | orchestrator |
2025-05-14 14:32:54.300015 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-14 14:32:54.300026 | orchestrator | Wednesday 14 May 2025 14:30:53 +0000 (0:00:00.552) 0:00:01.615 *********
2025-05-14 14:32:54.300037 | orchestrator | changed: [testbed-manager]
2025-05-14 14:32:54.300047 | orchestrator |
2025-05-14 14:32:54.300058 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
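(Context for the phpmyadmin play above: the osism.services.phpmyadmin role creates /opt/phpmyadmin, renders a docker-compose.yml into it, and then starts the service attached to the external traefik network created in the first task. A compose file of roughly the following shape would match those steps; the image tag, environment and traefik labels below are illustrative assumptions and are not taken from this log or from the role itself.)

# Sketch only -- assumed shape of /opt/phpmyadmin/docker-compose.yml; all values are illustrative.
services:
  phpmyadmin:
    image: phpmyadmin:latest          # assumed image/tag; the role may pin a specific version
    restart: unless-stopped
    environment:
      PMA_ARBITRARY: "1"              # assumed: lets the UI connect to an arbitrary database host
    networks:
      - traefik                       # the external network created by the first task of the play
    labels:
      - "traefik.enable=true"         # assumed traefik wiring; actual labels depend on the role defaults
networks:
  traefik:
    external: true                    # matches "Create traefik external network"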
2025-05-14 14:32:54.300069 | orchestrator | Wednesday 14 May 2025 14:30:55 +0000 (0:00:01.218) 0:00:02.834 *********
2025-05-14 14:32:54.300080 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-14 14:32:54.300090 | orchestrator | ok: [testbed-manager]
2025-05-14 14:32:54.300101 | orchestrator |
2025-05-14 14:32:54.300129 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-14 14:32:54.300141 | orchestrator | Wednesday 14 May 2025 14:31:47 +0000 (0:00:52.339) 0:00:55.173 *********
2025-05-14 14:32:54.300157 | orchestrator | changed: [testbed-manager]
2025-05-14 14:32:54.300176 | orchestrator |
2025-05-14 14:32:54.300193 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 14:32:54.300222 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 14:32:54.300242 | orchestrator |
2025-05-14 14:32:54.300260 | orchestrator | Wednesday 14 May 2025 14:31:50 +0000 (0:00:03.292) 0:00:58.465 *********
2025-05-14 14:32:54.300278 | orchestrator | ===============================================================================
2025-05-14 14:32:54.300296 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.34s
2025-05-14 14:32:54.300313 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.29s
2025-05-14 14:32:54.300368 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.22s
2025-05-14 14:32:54.300387 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.82s
2025-05-14 14:32:54.300405 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s
2025-05-14 14:32:54.300424 | orchestrator |
2025-05-14 14:32:54.300443 | orchestrator |
2025-05-14 14:32:54.300463 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-14 14:32:54.300539 | orchestrator |
2025-05-14 14:32:54.300553 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-14 14:32:54.300565 | orchestrator | Wednesday 14 May 2025 14:30:32 +0000 (0:00:00.286) 0:00:00.286 *********
2025-05-14 14:32:54.300578 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-14 14:32:54.300609 | orchestrator |
2025-05-14 14:32:54.300621 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-14 14:32:54.300633 | orchestrator | Wednesday 14 May 2025 14:30:34 +0000 (0:00:01.301) 0:00:01.588 *********
2025-05-14 14:32:54.300645 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-14 14:32:54.300657 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-14 14:32:54.300669 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-14 14:32:54.300681 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-14 14:32:54.300693 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-14 14:32:54.300706 | orchestrator | changed: [testbed-node-2] =>
(item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 14:32:54.300718 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 14:32:54.300730 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 14:32:54.300742 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 14:32:54.300754 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 14:32:54.300767 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 14:32:54.300778 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 14:32:54.300788 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 14:32:54.300799 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 14:32:54.300810 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 14:32:54.300821 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 14:32:54.300831 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 14:32:54.300858 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 14:32:54.300870 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 14:32:54.300880 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 14:32:54.300891 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 14:32:54.300902 | orchestrator | 2025-05-14 14:32:54.300913 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-14 14:32:54.300924 | orchestrator | Wednesday 14 May 2025 14:30:37 +0000 (0:00:03.682) 0:00:05.270 ********* 2025-05-14 14:32:54.300934 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:32:54.300947 | orchestrator | 2025-05-14 14:32:54.300957 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-14 14:32:54.300968 | orchestrator | Wednesday 14 May 2025 14:30:39 +0000 (0:00:01.567) 0:00:06.838 ********* 2025-05-14 14:32:54.300991 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.301105 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 
14:32:54.301195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301213 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.301309 | orchestrator | 2025-05-14 14:32:54.301321 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-14 14:32:54.301376 | orchestrator | Wednesday 14 May 2025 14:30:44 +0000 (0:00:04.668) 0:00:11.507 ********* 2025-05-14 14:32:54.301397 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301409 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301429 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301440 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:32:54.301459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301497 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:32:54.301517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301624 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:32:54.301635 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:32:54.301646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301680 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:32:54.301698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301756 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:32:54.301768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301821 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:32:54.301841 | orchestrator | 2025-05-14 14:32:54.301861 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-14 14:32:54.301881 | orchestrator | Wednesday 14 May 2025 14:30:46 +0000 (0:00:02.079) 0:00:13.587 ********* 2025-05-14 14:32:54.301896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.301966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.301989 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:32:54.302000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.302012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302512 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:32:54.302523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.302533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302559 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:32:54.302569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.302580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302600 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:32:54.302609 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:32:54.302625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.302644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302665 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:32:54.302679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 14:32:54.302690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.302710 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:32:54.302720 | orchestrator | 2025-05-14 14:32:54.302730 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-14 14:32:54.302739 | orchestrator | Wednesday 14 May 2025 14:30:49 +0000 (0:00:02.834) 0:00:16.421 ********* 2025-05-14 14:32:54.302749 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:32:54.302759 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:32:54.302768 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:32:54.302778 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:32:54.302788 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:32:54.302803 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:32:54.302813 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:32:54.302823 | orchestrator | 2025-05-14 14:32:54.302832 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-14 14:32:54.302842 | orchestrator | Wednesday 14 May 2025 14:30:50 +0000 (0:00:00.985) 0:00:17.407 ********* 2025-05-14 14:32:54.302851 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:32:54.302861 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:32:54.302870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:32:54.302880 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:32:54.302889 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:32:54.302899 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:32:54.302908 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:32:54.302918 | orchestrator | 2025-05-14 14:32:54.302928 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-14 14:32:54.302937 | orchestrator | Wednesday 14 May 2025 14:30:50 +0000 (0:00:00.857) 0:00:18.264 ********* 2025-05-14 14:32:54.302953 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:32:54.302965 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.302975 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.302984 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.302994 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.303003 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.303013 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.303022 | orchestrator | 2025-05-14 14:32:54.303032 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-14 14:32:54.303048 | orchestrator | 
Wednesday 14 May 2025 14:31:26 +0000 (0:00:35.875) 0:00:54.140 ********* 2025-05-14 14:32:54.303065 | orchestrator | ok: [testbed-manager] 2025-05-14 14:32:54.303089 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:32:54.303101 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:32:54.303119 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:32:54.303136 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:32:54.303153 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:32:54.303163 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:32:54.303172 | orchestrator | 2025-05-14 14:32:54.303182 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-14 14:32:54.303192 | orchestrator | Wednesday 14 May 2025 14:31:29 +0000 (0:00:02.472) 0:00:56.612 ********* 2025-05-14 14:32:54.303201 | orchestrator | ok: [testbed-manager] 2025-05-14 14:32:54.303211 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:32:54.303220 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:32:54.303230 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:32:54.303239 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:32:54.303255 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:32:54.303273 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:32:54.303287 | orchestrator | 2025-05-14 14:32:54.303297 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-14 14:32:54.303307 | orchestrator | Wednesday 14 May 2025 14:31:30 +0000 (0:00:00.977) 0:00:57.590 ********* 2025-05-14 14:32:54.303317 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:32:54.303353 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:32:54.303364 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:32:54.303374 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:32:54.303383 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:32:54.303393 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:32:54.303402 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:32:54.303412 | orchestrator | 2025-05-14 14:32:54.303422 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-14 14:32:54.303432 | orchestrator | Wednesday 14 May 2025 14:31:31 +0000 (0:00:00.909) 0:00:58.499 ********* 2025-05-14 14:32:54.303441 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:32:54.303451 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:32:54.303460 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:32:54.303486 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:32:54.303496 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:32:54.303521 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:32:54.303532 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:32:54.303541 | orchestrator | 2025-05-14 14:32:54.303551 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-14 14:32:54.303561 | orchestrator | Wednesday 14 May 2025 14:31:31 +0000 (0:00:00.863) 0:00:59.362 ********* 2025-05-14 14:32:54.303571 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303592 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303643 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-14 14:32:54.303661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.303736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.303875 | orchestrator | 2025-05-14 14:32:54.303893 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-14 14:32:54.303910 | orchestrator | Wednesday 14 May 2025 14:31:36 +0000 (0:00:04.494) 0:01:03.856 ********* 2025-05-14 14:32:54.303928 | orchestrator | [WARNING]: Skipped 2025-05-14 14:32:54.303940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-14 14:32:54.303950 | orchestrator | to this access issue: 2025-05-14 14:32:54.303960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-14 14:32:54.303970 | orchestrator | directory 2025-05-14 14:32:54.303979 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:32:54.303989 | orchestrator | 2025-05-14 14:32:54.303998 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-14 14:32:54.304008 | orchestrator | Wednesday 14 May 2025 14:31:37 +0000 (0:00:00.905) 0:01:04.762 ********* 2025-05-14 14:32:54.304017 | orchestrator | [WARNING]: Skipped 2025-05-14 14:32:54.304027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-14 14:32:54.304036 | orchestrator | to this access issue: 2025-05-14 14:32:54.304050 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-14 14:32:54.304060 | orchestrator | directory 2025-05-14 14:32:54.304070 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:32:54.304079 | orchestrator | 2025-05-14 14:32:54.304095 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-14 14:32:54.304111 | orchestrator | Wednesday 14 May 2025 14:31:38 +0000 (0:00:00.767) 0:01:05.529 ********* 2025-05-14 14:32:54.304121 | orchestrator | [WARNING]: Skipped 2025-05-14 14:32:54.304131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-14 14:32:54.304140 | orchestrator | to this access issue: 2025-05-14 14:32:54.304149 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-14 14:32:54.304159 | orchestrator | directory 2025-05-14 14:32:54.304168 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:32:54.304178 | orchestrator | 2025-05-14 14:32:54.304188 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-14 14:32:54.304197 | orchestrator | Wednesday 14 May 2025 14:31:38 +0000 (0:00:00.497) 0:01:06.026 ********* 2025-05-14 14:32:54.304207 | orchestrator | [WARNING]: 
Skipped 2025-05-14 14:32:54.304216 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-14 14:32:54.304226 | orchestrator | to this access issue: 2025-05-14 14:32:54.304235 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-14 14:32:54.304245 | orchestrator | directory 2025-05-14 14:32:54.304254 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:32:54.304263 | orchestrator | 2025-05-14 14:32:54.304273 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-14 14:32:54.304282 | orchestrator | Wednesday 14 May 2025 14:31:39 +0000 (0:00:00.468) 0:01:06.495 ********* 2025-05-14 14:32:54.304292 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.304301 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.304310 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.304320 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.304350 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.304360 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.304370 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.304379 | orchestrator | 2025-05-14 14:32:54.304389 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-14 14:32:54.304399 | orchestrator | Wednesday 14 May 2025 14:31:43 +0000 (0:00:04.045) 0:01:10.540 ********* 2025-05-14 14:32:54.304408 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304418 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304434 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304454 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304464 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304473 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 14:32:54.304483 | orchestrator | 2025-05-14 14:32:54.304492 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-14 14:32:54.304502 | orchestrator | Wednesday 14 May 2025 14:31:45 +0000 (0:00:02.446) 0:01:12.986 ********* 2025-05-14 14:32:54.304511 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.304521 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.304530 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.304540 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.304549 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.304569 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.304588 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.304604 | orchestrator | 2025-05-14 14:32:54.304616 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-14 14:32:54.304633 | orchestrator | Wednesday 14 May 2025 14:31:48 +0000 (0:00:02.485) 
0:01:15.472 ********* 2025-05-14 14:32:54.304653 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304679 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304777 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.304791 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304810 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304821 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.304831 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304857 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.304867 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304896 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304928 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.304938 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.304984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:32:54.304997 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305014 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305025 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305035 | orchestrator | 2025-05-14 14:32:54.305044 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-14 14:32:54.305054 | orchestrator | Wednesday 14 May 2025 14:31:50 +0000 (0:00:01.969) 0:01:17.442 ********* 2025-05-14 14:32:54.305064 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305073 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305093 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305102 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305112 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305122 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 14:32:54.305131 | orchestrator | 2025-05-14 14:32:54.305145 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-14 14:32:54.305172 | orchestrator | Wednesday 14 May 2025 14:31:52 +0000 (0:00:02.776) 0:01:20.218 ********* 2025-05-14 14:32:54.305183 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305193 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305202 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305235 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305245 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305255 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 14:32:54.305264 | orchestrator | 2025-05-14 14:32:54.305274 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-14 14:32:54.305283 | orchestrator | Wednesday 14 May 2025 14:31:55 +0000 (0:00:02.535) 0:01:22.754 ********* 2025-05-14 14:32:54.305300 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305370 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 14:32:54.305475 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:32:54.305588 | orchestrator | 2025-05-14 14:32:54.305598 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-14 14:32:54.305608 | orchestrator | Wednesday 14 May 2025 14:31:58 +0000 (0:00:03.270) 0:01:26.025 ********* 2025-05-14 14:32:54.305618 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.305633 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.305643 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.305652 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.305662 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.305671 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.305681 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.305690 | orchestrator | 2025-05-14 14:32:54.305700 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-14 14:32:54.305709 | orchestrator | Wednesday 14 May 2025 14:32:00 +0000 (0:00:01.951) 0:01:27.976 ********* 2025-05-14 14:32:54.305725 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.305734 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.305744 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.305784 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.305802 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.305811 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.305821 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.305830 | orchestrator | 2025-05-14 14:32:54.305840 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 14:32:54.305849 | orchestrator | Wednesday 14 May 2025 14:32:01 +0000 (0:00:01.382) 0:01:29.359 ********* 2025-05-14 14:32:54.305859 | orchestrator | 2025-05-14 14:32:54.305868 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2025-05-14 14:32:54.305878 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.060) 0:01:29.419 ********* 2025-05-14 14:32:54.305887 | orchestrator | 2025-05-14 14:32:54.305897 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 14:32:54.305906 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.055) 0:01:29.474 ********* 2025-05-14 14:32:54.305919 | orchestrator | 2025-05-14 14:32:54.305936 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 14:32:54.305947 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.054) 0:01:29.528 ********* 2025-05-14 14:32:54.305957 | orchestrator | 2025-05-14 14:32:54.305970 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 14:32:54.305980 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.246) 0:01:29.774 ********* 2025-05-14 14:32:54.305990 | orchestrator | 2025-05-14 14:32:54.306000 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 14:32:54.306009 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.081) 0:01:29.856 ********* 2025-05-14 14:32:54.306055 | orchestrator | 2025-05-14 14:32:54.306066 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 14:32:54.306076 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.058) 0:01:29.914 ********* 2025-05-14 14:32:54.306085 | orchestrator | 2025-05-14 14:32:54.306095 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-14 14:32:54.306104 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.070) 0:01:29.984 ********* 2025-05-14 14:32:54.306114 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.306123 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.306133 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.306143 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.306152 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.306162 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.306171 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.306181 | orchestrator | 2025-05-14 14:32:54.306191 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-14 14:32:54.306200 | orchestrator | Wednesday 14 May 2025 14:32:11 +0000 (0:00:08.842) 0:01:38.826 ********* 2025-05-14 14:32:54.306210 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.306219 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.306229 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.306238 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.306248 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.306257 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.306267 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.306277 | orchestrator | 2025-05-14 14:32:54.306286 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-14 14:32:54.306296 | orchestrator | Wednesday 14 May 2025 14:32:39 +0000 (0:00:28.082) 0:02:06.909 ********* 2025-05-14 14:32:54.306306 | orchestrator | ok: [testbed-manager] 2025-05-14 14:32:54.306316 | orchestrator | ok: 
[testbed-node-1] 2025-05-14 14:32:54.306519 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:32:54.306692 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:32:54.306711 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:32:54.306722 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:32:54.306738 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:32:54.306758 | orchestrator | 2025-05-14 14:32:54.306779 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-14 14:32:54.306793 | orchestrator | Wednesday 14 May 2025 14:32:41 +0000 (0:00:02.459) 0:02:09.368 ********* 2025-05-14 14:32:54.306804 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:32:54.306823 | orchestrator | changed: [testbed-manager] 2025-05-14 14:32:54.306835 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:32:54.306846 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:32:54.306856 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:32:54.306867 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:32:54.306878 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:32:54.306888 | orchestrator | 2025-05-14 14:32:54.306899 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:32:54.306911 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.306923 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.306934 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.306983 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.306997 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.307128 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.307181 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:32:54.307194 | orchestrator | 2025-05-14 14:32:54.307205 | orchestrator | 2025-05-14 14:32:54.307216 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:32:54.307227 | orchestrator | Wednesday 14 May 2025 14:32:51 +0000 (0:00:09.608) 0:02:18.977 ********* 2025-05-14 14:32:54.307238 | orchestrator | =============================================================================== 2025-05-14 14:32:54.307249 | orchestrator | common : Ensure fluentd image is present for label check --------------- 35.88s 2025-05-14 14:32:54.307259 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.08s 2025-05-14 14:32:54.307270 | orchestrator | common : Restart cron container ----------------------------------------- 9.61s 2025-05-14 14:32:54.307281 | orchestrator | common : Restart fluentd container -------------------------------------- 8.84s 2025-05-14 14:32:54.307292 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.67s 2025-05-14 14:32:54.307303 | orchestrator | common : Copying over config.json files for services -------------------- 4.49s 2025-05-14 14:32:54.307357 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.05s 
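The per-task timings above and below are the standard kolla-ansible TASKS RECAP for the common role. Once a play finishes, the subsequent "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines are the OSISM client polling the manager roughly once per second until each queued deploy task reports SUCCESS. A minimal sketch of that kind of polling loop follows; the names wait_for_tasks and get_state are hypothetical illustrations, not the actual osism client code.

import time
from enum import Enum


class State(Enum):
    # Mirrors the state strings visible in the log output above.
    STARTED = "STARTED"
    SUCCESS = "SUCCESS"
    FAILURE = "FAILURE"


def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll each task until it leaves STARTED, logging like the output above.

    get_state is an assumed callable (task_id -> State); the real OSISM
    manager queries its own task backend, which is not reproduced here.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so it is safe to discard entries below.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state.value}")
            if state is not State.STARTED:
                pending.discard(task_id)  # finished (SUCCESS or FAILURE)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

In this build the loop-equivalent keeps reporting six task UUIDs as STARTED until 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 and later 5ac3a4c5-6561-4b4c-b9be-2326821dc50e flip to SUCCESS, at which point the corresponding play output (memcached deploy) is printed.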
2025-05-14 14:32:54.307371 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.68s 2025-05-14 14:32:54.307382 | orchestrator | common : Check common containers ---------------------------------------- 3.27s 2025-05-14 14:32:54.307392 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.83s 2025-05-14 14:32:54.307403 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.78s 2025-05-14 14:32:54.307414 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.54s 2025-05-14 14:32:54.307435 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.49s 2025-05-14 14:32:54.307446 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.47s 2025-05-14 14:32:54.307457 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.46s 2025-05-14 14:32:54.307468 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.45s 2025-05-14 14:32:54.307479 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.08s 2025-05-14 14:32:54.307491 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.97s 2025-05-14 14:32:54.307501 | orchestrator | common : Creating log volume -------------------------------------------- 1.95s 2025-05-14 14:32:54.307513 | orchestrator | common : include_tasks -------------------------------------------------- 1.57s 2025-05-14 14:32:54.307524 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:32:54.307535 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:32:54.307546 | orchestrator | 2025-05-14 14:32:54 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:32:54.307557 | orchestrator | 2025-05-14 14:32:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:32:57.327523 | orchestrator | 2025-05-14 14:32:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:32:57.327901 | orchestrator | 2025-05-14 14:32:57 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:32:57.328614 | orchestrator | 2025-05-14 14:32:57 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state STARTED 2025-05-14 14:32:57.329209 | orchestrator | 2025-05-14 14:32:57 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:32:57.329818 | orchestrator | 2025-05-14 14:32:57 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:32:57.330376 | orchestrator | 2025-05-14 14:32:57 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:32:57.330590 | orchestrator | 2025-05-14 14:32:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:00.360122 | orchestrator | 2025-05-14 14:33:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:00.360879 | orchestrator | 2025-05-14 14:33:00 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:00.361943 | orchestrator | 2025-05-14 14:33:00 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state STARTED 2025-05-14 14:33:00.363205 | orchestrator | 2025-05-14 14:33:00 | INFO  | Task 
5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:00.364390 | orchestrator | 2025-05-14 14:33:00 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:00.364923 | orchestrator | 2025-05-14 14:33:00 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:00.365046 | orchestrator | 2025-05-14 14:33:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:03.393633 | orchestrator | 2025-05-14 14:33:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:03.394439 | orchestrator | 2025-05-14 14:33:03 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:03.395956 | orchestrator | 2025-05-14 14:33:03 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state STARTED 2025-05-14 14:33:03.396420 | orchestrator | 2025-05-14 14:33:03 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:03.397208 | orchestrator | 2025-05-14 14:33:03 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:03.397669 | orchestrator | 2025-05-14 14:33:03 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:03.397693 | orchestrator | 2025-05-14 14:33:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:06.441561 | orchestrator | 2025-05-14 14:33:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:06.441954 | orchestrator | 2025-05-14 14:33:06 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:06.443409 | orchestrator | 2025-05-14 14:33:06 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state STARTED 2025-05-14 14:33:06.444493 | orchestrator | 2025-05-14 14:33:06 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:06.445108 | orchestrator | 2025-05-14 14:33:06 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:06.446088 | orchestrator | 2025-05-14 14:33:06 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:06.446115 | orchestrator | 2025-05-14 14:33:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:09.476199 | orchestrator | 2025-05-14 14:33:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:09.478309 | orchestrator | 2025-05-14 14:33:09 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:09.479125 | orchestrator | 2025-05-14 14:33:09 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state STARTED 2025-05-14 14:33:09.480024 | orchestrator | 2025-05-14 14:33:09 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:09.482859 | orchestrator | 2025-05-14 14:33:09 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:09.482878 | orchestrator | 2025-05-14 14:33:09 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:09.482888 | orchestrator | 2025-05-14 14:33:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:12.538940 | orchestrator | 2025-05-14 14:33:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:12.539015 | orchestrator | 2025-05-14 14:33:12 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:12.539029 | orchestrator | 2025-05-14 
14:33:12 | INFO  | Task 7c4850f6-ad79-44c5-90c4-38d86b6f8c65 is in state SUCCESS 2025-05-14 14:33:12.539040 | orchestrator | 2025-05-14 14:33:12 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:12.539051 | orchestrator | 2025-05-14 14:33:12 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:12.540747 | orchestrator | 2025-05-14 14:33:12 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:12.540792 | orchestrator | 2025-05-14 14:33:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:15.572438 | orchestrator | 2025-05-14 14:33:15 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:15.575077 | orchestrator | 2025-05-14 14:33:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:15.576213 | orchestrator | 2025-05-14 14:33:15 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:15.576928 | orchestrator | 2025-05-14 14:33:15 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:15.577847 | orchestrator | 2025-05-14 14:33:15 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:15.578932 | orchestrator | 2025-05-14 14:33:15 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:15.578960 | orchestrator | 2025-05-14 14:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:18.602837 | orchestrator | 2025-05-14 14:33:18 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:18.603273 | orchestrator | 2025-05-14 14:33:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:18.603880 | orchestrator | 2025-05-14 14:33:18 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:18.604560 | orchestrator | 2025-05-14 14:33:18 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:18.605368 | orchestrator | 2025-05-14 14:33:18 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:18.605959 | orchestrator | 2025-05-14 14:33:18 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:18.605984 | orchestrator | 2025-05-14 14:33:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:21.634210 | orchestrator | 2025-05-14 14:33:21 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:21.635473 | orchestrator | 2025-05-14 14:33:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:21.635953 | orchestrator | 2025-05-14 14:33:21 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:21.636523 | orchestrator | 2025-05-14 14:33:21 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:21.637701 | orchestrator | 2025-05-14 14:33:21 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:21.638923 | orchestrator | 2025-05-14 14:33:21 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:21.638947 | orchestrator | 2025-05-14 14:33:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:24.671576 | orchestrator | 2025-05-14 14:33:24 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:24.671886 | 
orchestrator | 2025-05-14 14:33:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:24.672793 | orchestrator | 2025-05-14 14:33:24 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:24.677106 | orchestrator | 2025-05-14 14:33:24 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state STARTED 2025-05-14 14:33:24.677717 | orchestrator | 2025-05-14 14:33:24 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:24.678912 | orchestrator | 2025-05-14 14:33:24 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:24.678937 | orchestrator | 2025-05-14 14:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:27.708151 | orchestrator | 2025-05-14 14:33:27 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:27.708208 | orchestrator | 2025-05-14 14:33:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:27.708796 | orchestrator | 2025-05-14 14:33:27 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:27.709510 | orchestrator | 2025-05-14 14:33:27 | INFO  | Task 5ac3a4c5-6561-4b4c-b9be-2326821dc50e is in state SUCCESS 2025-05-14 14:33:27.713575 | orchestrator | 2025-05-14 14:33:27.713621 | orchestrator | 2025-05-14 14:33:27.713634 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:33:27.713646 | orchestrator | 2025-05-14 14:33:27.713657 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:33:27.713669 | orchestrator | Wednesday 14 May 2025 14:32:55 +0000 (0:00:00.311) 0:00:00.311 ********* 2025-05-14 14:33:27.713680 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:33:27.713691 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:33:27.713702 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:33:27.713712 | orchestrator | 2025-05-14 14:33:27.713723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:33:27.713734 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.444) 0:00:00.756 ********* 2025-05-14 14:33:27.713745 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-14 14:33:27.713756 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-14 14:33:27.713767 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-14 14:33:27.713777 | orchestrator | 2025-05-14 14:33:27.713788 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-14 14:33:27.713799 | orchestrator | 2025-05-14 14:33:27.713809 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-14 14:33:27.713820 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.378) 0:00:01.134 ********* 2025-05-14 14:33:27.713831 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:33:27.713842 | orchestrator | 2025-05-14 14:33:27.713853 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-14 14:33:27.713864 | orchestrator | Wednesday 14 May 2025 14:32:57 +0000 (0:00:00.976) 0:00:02.110 ********* 2025-05-14 14:33:27.713874 | orchestrator | changed: [testbed-node-1] => 
(item=memcached) 2025-05-14 14:33:27.713885 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-14 14:33:27.713896 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-14 14:33:27.713906 | orchestrator | 2025-05-14 14:33:27.713917 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-14 14:33:27.713928 | orchestrator | Wednesday 14 May 2025 14:32:58 +0000 (0:00:01.056) 0:00:03.167 ********* 2025-05-14 14:33:27.713938 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-14 14:33:27.713949 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-14 14:33:27.713960 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-14 14:33:27.713970 | orchestrator | 2025-05-14 14:33:27.713981 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-14 14:33:27.713992 | orchestrator | Wednesday 14 May 2025 14:33:00 +0000 (0:00:01.778) 0:00:04.946 ********* 2025-05-14 14:33:27.714002 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:33:27.714014 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:33:27.714074 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:33:27.714085 | orchestrator | 2025-05-14 14:33:27.714096 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-14 14:33:27.714119 | orchestrator | Wednesday 14 May 2025 14:33:02 +0000 (0:00:02.442) 0:00:07.388 ********* 2025-05-14 14:33:27.714130 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:33:27.714140 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:33:27.714151 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:33:27.714161 | orchestrator | 2025-05-14 14:33:27.714172 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:33:27.714183 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:33:27.714197 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:33:27.714223 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:33:27.714235 | orchestrator | 2025-05-14 14:33:27.714247 | orchestrator | 2025-05-14 14:33:27.714259 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:33:27.714271 | orchestrator | Wednesday 14 May 2025 14:33:11 +0000 (0:00:08.267) 0:00:15.656 ********* 2025-05-14 14:33:27.714283 | orchestrator | =============================================================================== 2025-05-14 14:33:27.714295 | orchestrator | memcached : Restart memcached container --------------------------------- 8.27s 2025-05-14 14:33:27.714327 | orchestrator | memcached : Check memcached container ----------------------------------- 2.44s 2025-05-14 14:33:27.714339 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.78s 2025-05-14 14:33:27.714351 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.06s 2025-05-14 14:33:27.714363 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.98s 2025-05-14 14:33:27.714375 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-05-14 14:33:27.714386 | orchestrator | Group 
hosts based on enabled services ----------------------------------- 0.38s 2025-05-14 14:33:27.714398 | orchestrator | 2025-05-14 14:33:27.714409 | orchestrator | 2025-05-14 14:33:27.714421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:33:27.714433 | orchestrator | 2025-05-14 14:33:27.714445 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:33:27.714457 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.336) 0:00:00.336 ********* 2025-05-14 14:33:27.714469 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:33:27.714480 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:33:27.714492 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:33:27.714504 | orchestrator | 2025-05-14 14:33:27.714516 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:33:27.714540 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.402) 0:00:00.739 ********* 2025-05-14 14:33:27.714552 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-14 14:33:27.714563 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-14 14:33:27.714574 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-14 14:33:27.714584 | orchestrator | 2025-05-14 14:33:27.714595 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-14 14:33:27.714606 | orchestrator | 2025-05-14 14:33:27.714616 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-14 14:33:27.714627 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.316) 0:00:01.055 ********* 2025-05-14 14:33:27.714637 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:33:27.714648 | orchestrator | 2025-05-14 14:33:27.714659 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-14 14:33:27.714669 | orchestrator | Wednesday 14 May 2025 14:32:57 +0000 (0:00:00.911) 0:00:01.967 ********* 2025-05-14 14:33:27.714683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714785 | orchestrator | 2025-05-14 14:33:27.714797 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-14 14:33:27.714808 | orchestrator | Wednesday 14 May 2025 14:32:59 +0000 (0:00:01.588) 0:00:03.556 ********* 2025-05-14 14:33:27.714819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714917 | orchestrator | 2025-05-14 14:33:27.714928 | orchestrator | TASK [redis : 
Copying over redis config files] ********************************* 2025-05-14 14:33:27.714938 | orchestrator | Wednesday 14 May 2025 14:33:01 +0000 (0:00:02.543) 0:00:06.099 ********* 2025-05-14 14:33:27.714950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.714996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715025 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715037 | orchestrator | 2025-05-14 14:33:27.715048 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-14 14:33:27.715058 | orchestrator | Wednesday 14 May 2025 14:33:05 +0000 (0:00:03.211) 0:00:09.311 ********* 2025-05-14 14:33:27.715069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715127 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 14:33:27.715157 | orchestrator | 2025-05-14 14:33:27.715167 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 14:33:27.715178 | orchestrator | Wednesday 14 May 2025 14:33:07 +0000 (0:00:02.725) 0:00:12.036 ********* 2025-05-14 14:33:27.715189 | orchestrator | 2025-05-14 14:33:27.715200 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 14:33:27.715211 | orchestrator | Wednesday 14 May 2025 14:33:07 +0000 (0:00:00.065) 0:00:12.102 ********* 2025-05-14 14:33:27.715229 | orchestrator | 2025-05-14 14:33:27.715240 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 14:33:27.715250 | orchestrator | Wednesday 14 May 2025 14:33:08 +0000 (0:00:00.126) 0:00:12.228 ********* 2025-05-14 14:33:27.715261 | orchestrator | 2025-05-14 14:33:27.715272 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-14 14:33:27.715283 | orchestrator | Wednesday 14 May 2025 14:33:08 +0000 (0:00:00.281) 0:00:12.510 ********* 2025-05-14 14:33:27.715293 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:33:27.715323 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:33:27.715335 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:33:27.715346 | orchestrator | 2025-05-14 14:33:27.715356 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-14 14:33:27.715367 | orchestrator | Wednesday 14 May 2025 14:33:18 +0000 (0:00:09.774) 0:00:22.284 ********* 2025-05-14 14:33:27.715378 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:33:27.715389 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:33:27.715400 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:33:27.715410 | orchestrator | 2025-05-14 14:33:27.715421 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:33:27.715432 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2025-05-14 14:33:27.715443 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:33:27.715454 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:33:27.715465 | orchestrator | 2025-05-14 14:33:27.715476 | orchestrator | 2025-05-14 14:33:27.715487 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:33:27.715497 | orchestrator | Wednesday 14 May 2025 14:33:25 +0000 (0:00:07.568) 0:00:29.853 ********* 2025-05-14 14:33:27.715508 | orchestrator | =============================================================================== 2025-05-14 14:33:27.715519 | orchestrator | redis : Restart redis container ----------------------------------------- 9.77s 2025-05-14 14:33:27.715529 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.57s 2025-05-14 14:33:27.715540 | orchestrator | redis : Copying over redis config files --------------------------------- 3.21s 2025-05-14 14:33:27.715551 | orchestrator | redis : Check redis containers ------------------------------------------ 2.73s 2025-05-14 14:33:27.715561 | orchestrator | redis : Copying over default config.json files -------------------------- 2.54s 2025-05-14 14:33:27.715572 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.59s 2025-05-14 14:33:27.715589 | orchestrator | redis : include_tasks --------------------------------------------------- 0.91s 2025-05-14 14:33:27.715600 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.47s 2025-05-14 14:33:27.715610 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-05-14 14:33:27.715621 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2025-05-14 14:33:27.715632 | orchestrator | 2025-05-14 14:33:27 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:27.715643 | orchestrator | 2025-05-14 14:33:27 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:27.715654 | orchestrator | 2025-05-14 14:33:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:30.758788 | orchestrator | 2025-05-14 14:33:30 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:30.760357 | orchestrator | 2025-05-14 14:33:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:30.761746 | orchestrator | 2025-05-14 14:33:30 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:30.765360 | orchestrator | 2025-05-14 14:33:30 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:30.766360 | orchestrator | 2025-05-14 14:33:30 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:30.766385 | orchestrator | 2025-05-14 14:33:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:33.801924 | orchestrator | 2025-05-14 14:33:33 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:33.803865 | orchestrator | 2025-05-14 14:33:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:33.804017 | orchestrator | 2025-05-14 14:33:33 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 
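For readability, here is one of the loop items from the redis play above, re-typed as a plain Python dict; the values are copied from the log entries themselves, nothing is added or changed. This is the per-service structure that the kolla-ansible role iterates over in the `=> (item={'key': ..., 'value': ...})` lines.

    # Loop item for the redis-sentinel service, as printed by the play above.
    redis_sentinel_item = {
        "key": "redis-sentinel",
        "value": {
            "container_name": "redis_sentinel",
            "group": "redis",
            "environment": {
                "REDIS_CONF": "/etc/redis/redis.conf",
                "REDIS_GEN_CONF": "/etc/redis/redis-regenerated-by-config-rewrite.conf",
            },
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206",
            "volumes": [
                "/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
            # Health check: probe the Sentinel port (26379) inside the container.
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"],
                "timeout": "30",
            },
        },
    }

The redis item shown in the same play differs only in its name and image, the config-files mount path, the extra redis:/var/lib/redis/ volume, the absence of the environment block, and a health check against redis-server on port 6379.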
2025-05-14 14:33:33.804766 | orchestrator | 2025-05-14 14:33:33 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED
2025-05-14 14:33:33.806394 | orchestrator | 2025-05-14 14:33:33 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:33:33.806480 | orchestrator | 2025-05-14 14:33:33 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:33:36.837493 | orchestrator | 2025-05-14 14:33:36 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED
2025-05-14 14:33:36.838362 | orchestrator | 2025-05-14 14:33:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:33:36.839322 | orchestrator | 2025-05-14 14:33:36 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:33:36.839924 | orchestrator | 2025-05-14 14:33:36 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED
2025-05-14 14:33:36.841681 | orchestrator | 2025-05-14 14:33:36 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:33:36.841704 | orchestrator | 2025-05-14 14:33:36 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:33:39.864607 | orchestrator | 2025-05-14 14:33:39 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED
2025-05-14 14:33:39.865174 | orchestrator | 2025-05-14 14:33:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:33:39.868155 | orchestrator | 2025-05-14 14:33:39 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:33:39.869159 | orchestrator | 2025-05-14 14:33:39 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED
2025-05-14 14:33:39.869196 | orchestrator | 2025-05-14 14:33:39 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:33:39.869911 | orchestrator | 2025-05-14 14:33:39 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:33:42.894281 | orchestrator | 2025-05-14 14:33:42 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED
2025-05-14 14:33:42.895015 | orchestrator | 2025-05-14 14:33:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:33:42.895059 | orchestrator | 2025-05-14 14:33:42 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:33:42.895683 | orchestrator | 2025-05-14 14:33:42 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED
2025-05-14 14:33:42.896559 | orchestrator | 2025-05-14 14:33:42 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:33:42.896579 | orchestrator | 2025-05-14 14:33:42 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:33:45.928152 | orchestrator | 2025-05-14 14:33:45 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED
2025-05-14 14:33:45.929590 | orchestrator | 2025-05-14 14:33:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:33:45.930938 | orchestrator | 2025-05-14 14:33:45 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:33:45.932608 | orchestrator | 2025-05-14 14:33:45 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED
2025-05-14 14:33:45.934660 | orchestrator | 2025-05-14 14:33:45 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:33:45.935508 | orchestrator | 2025-05-14 14:33:45 | INFO  | Wait 1 second(s) until the next check
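The repeated "is in state STARTED ... Wait 1 second(s) until the next check" entries come from the client polling the state of the queued deployment tasks until each one reports SUCCESS, at which point the captured Ansible output for that task is printed inline (as happened for the memcached and redis plays earlier). A minimal sketch of such a wait loop is shown below, assuming Celery-style task states; the function and variable names are illustrative and not the actual osism client code.

    import time
    from celery.result import AsyncResult

    def wait_for_tasks(task_ids, app, interval=1):
        """Poll task states until every task has finished (sketch only)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                result = AsyncResult(task_id, app=app)
                print(f"Task {task_id} is in state {result.state}")
                if result.state in ("SUCCESS", "FAILURE"):
                    if result.state == "SUCCESS":
                        # Print the captured output of the finished task, if any.
                        print(result.result)
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)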
2025-05-14 14:33:48.972067 | orchestrator | 2025-05-14 14:33:48 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:48.973115 | orchestrator | 2025-05-14 14:33:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:48.973697 | orchestrator | 2025-05-14 14:33:48 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:48.975728 | orchestrator | 2025-05-14 14:33:48 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:48.976788 | orchestrator | 2025-05-14 14:33:48 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:48.976826 | orchestrator | 2025-05-14 14:33:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:52.013060 | orchestrator | 2025-05-14 14:33:52 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:52.015564 | orchestrator | 2025-05-14 14:33:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:52.016490 | orchestrator | 2025-05-14 14:33:52 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:52.017523 | orchestrator | 2025-05-14 14:33:52 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:52.018408 | orchestrator | 2025-05-14 14:33:52 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:52.018450 | orchestrator | 2025-05-14 14:33:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:55.070162 | orchestrator | 2025-05-14 14:33:55 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:55.071213 | orchestrator | 2025-05-14 14:33:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:55.071955 | orchestrator | 2025-05-14 14:33:55 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:55.072781 | orchestrator | 2025-05-14 14:33:55 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:55.073714 | orchestrator | 2025-05-14 14:33:55 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:55.073754 | orchestrator | 2025-05-14 14:33:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:33:58.121539 | orchestrator | 2025-05-14 14:33:58 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:33:58.124866 | orchestrator | 2025-05-14 14:33:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:33:58.126110 | orchestrator | 2025-05-14 14:33:58 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:33:58.127615 | orchestrator | 2025-05-14 14:33:58 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:33:58.133353 | orchestrator | 2025-05-14 14:33:58 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:33:58.133427 | orchestrator | 2025-05-14 14:33:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:01.166371 | orchestrator | 2025-05-14 14:34:01 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:01.167364 | orchestrator | 2025-05-14 14:34:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:01.168822 | orchestrator | 2025-05-14 14:34:01 | INFO  | Task 
9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:01.170065 | orchestrator | 2025-05-14 14:34:01 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:34:01.170948 | orchestrator | 2025-05-14 14:34:01 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:01.171225 | orchestrator | 2025-05-14 14:34:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:04.217793 | orchestrator | 2025-05-14 14:34:04 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:04.218785 | orchestrator | 2025-05-14 14:34:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:04.218819 | orchestrator | 2025-05-14 14:34:04 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:04.219697 | orchestrator | 2025-05-14 14:34:04 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state STARTED 2025-05-14 14:34:04.220381 | orchestrator | 2025-05-14 14:34:04 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:04.220406 | orchestrator | 2025-05-14 14:34:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:07.282549 | orchestrator | 2025-05-14 14:34:07 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:07.283140 | orchestrator | 2025-05-14 14:34:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:07.284408 | orchestrator | 2025-05-14 14:34:07 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:07.285694 | orchestrator | 2025-05-14 14:34:07 | INFO  | Task 327a6e62-17a0-4f39-82df-c703188c6882 is in state SUCCESS 2025-05-14 14:34:07.287800 | orchestrator | 2025-05-14 14:34:07.287952 | orchestrator | 2025-05-14 14:34:07.287981 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:34:07.288001 | orchestrator | 2025-05-14 14:34:07.288021 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:34:07.288041 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.260) 0:00:00.260 ********* 2025-05-14 14:34:07.288053 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:34:07.288065 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:34:07.288076 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:34:07.288087 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:34:07.288098 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:34:07.288108 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:34:07.288119 | orchestrator | 2025-05-14 14:34:07.288130 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:34:07.288141 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.712) 0:00:00.973 ********* 2025-05-14 14:34:07.288152 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 14:34:07.288163 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 14:34:07.288179 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 14:34:07.288190 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 14:34:07.288201 | orchestrator | ok: [testbed-node-1] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 14:34:07.288236 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 14:34:07.288255 | orchestrator | 2025-05-14 14:34:07.288292 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-14 14:34:07.288304 | orchestrator | 2025-05-14 14:34:07.288315 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-14 14:34:07.288326 | orchestrator | Wednesday 14 May 2025 14:32:57 +0000 (0:00:00.836) 0:00:01.810 ********* 2025-05-14 14:34:07.288339 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:34:07.288351 | orchestrator | 2025-05-14 14:34:07.288362 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 14:34:07.288373 | orchestrator | Wednesday 14 May 2025 14:32:58 +0000 (0:00:01.286) 0:00:03.097 ********* 2025-05-14 14:34:07.288385 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-14 14:34:07.288399 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-14 14:34:07.288411 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-14 14:34:07.288424 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-14 14:34:07.288436 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-14 14:34:07.288448 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-14 14:34:07.288460 | orchestrator | 2025-05-14 14:34:07.288472 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 14:34:07.288484 | orchestrator | Wednesday 14 May 2025 14:33:00 +0000 (0:00:01.397) 0:00:04.494 ********* 2025-05-14 14:34:07.288496 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-14 14:34:07.288508 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-14 14:34:07.288520 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-14 14:34:07.288532 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-14 14:34:07.288552 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-14 14:34:07.288565 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-14 14:34:07.288577 | orchestrator | 2025-05-14 14:34:07.288590 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 14:34:07.288602 | orchestrator | Wednesday 14 May 2025 14:33:02 +0000 (0:00:02.216) 0:00:06.711 ********* 2025-05-14 14:34:07.288614 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-14 14:34:07.288625 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:34:07.288639 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-14 14:34:07.288651 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:34:07.288664 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-14 14:34:07.288676 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:34:07.288688 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-14 14:34:07.288700 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:34:07.288712 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-14 
14:34:07.288725 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:34:07.288736 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-14 14:34:07.288746 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:34:07.288757 | orchestrator | 2025-05-14 14:34:07.288768 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-14 14:34:07.288779 | orchestrator | Wednesday 14 May 2025 14:33:03 +0000 (0:00:01.370) 0:00:08.081 ********* 2025-05-14 14:34:07.288790 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:34:07.288800 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:34:07.288811 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:34:07.288822 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:34:07.288832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:34:07.288843 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:34:07.288862 | orchestrator | 2025-05-14 14:34:07.288873 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-14 14:34:07.288884 | orchestrator | Wednesday 14 May 2025 14:33:04 +0000 (0:00:00.584) 0:00:08.666 ********* 2025-05-14 14:34:07.288916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.288933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.288945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.288961 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.288974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.288992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289049 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289097 | orchestrator | 2025-05-14 14:34:07.289108 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-14 14:34:07.289119 | orchestrator | Wednesday 14 May 2025 14:33:06 +0000 (0:00:02.109) 0:00:10.775 ********* 2025-05-14 14:34:07.289131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289324 | orchestrator | 2025-05-14 14:34:07.289335 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-14 14:34:07.289346 | orchestrator | Wednesday 14 May 2025 14:33:09 +0000 (0:00:03.112) 0:00:13.887 ********* 2025-05-14 14:34:07.289357 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:34:07.289369 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:34:07.289455 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:34:07.289469 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:34:07.289480 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:34:07.289491 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:34:07.289502 | orchestrator | 2025-05-14 14:34:07.289513 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-14 14:34:07.289524 | orchestrator | Wednesday 14 May 2025 14:33:12 +0000 (0:00:02.726) 0:00:16.613 ********* 2025-05-14 14:34:07.289535 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:34:07.289546 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:34:07.289557 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:34:07.289568 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:34:07.289579 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:34:07.289590 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:34:07.289601 | orchestrator | 2025-05-14 14:34:07.289612 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-14 14:34:07.289622 | orchestrator | Wednesday 14 May 2025 14:33:15 +0000 (0:00:02.682) 0:00:19.296 ********* 2025-05-14 14:34:07.289633 | 
orchestrator | skipping: [testbed-node-3] 2025-05-14 14:34:07.289644 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:34:07.289655 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:34:07.289666 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:34:07.289677 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:34:07.289687 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:34:07.289698 | orchestrator | 2025-05-14 14:34:07.289709 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-14 14:34:07.289727 | orchestrator | Wednesday 14 May 2025 14:33:16 +0000 (0:00:01.161) 0:00:20.457 ********* 2025-05-14 14:34:07.289744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 14:34:07.289928 | orchestrator | 2025-05-14 14:34:07.289939 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 14:34:07.289950 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:02.812) 0:00:23.270 ********* 2025-05-14 14:34:07.289961 | orchestrator | 2025-05-14 14:34:07.289972 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 14:34:07.289983 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.100) 0:00:23.370 ********* 2025-05-14 14:34:07.289994 | orchestrator | 2025-05-14 14:34:07.290005 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 14:34:07.290086 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.242) 0:00:23.613 ********* 2025-05-14 14:34:07.290126 | orchestrator | 2025-05-14 14:34:07.290143 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 14:34:07.290163 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.178) 0:00:23.791 ********* 2025-05-14 14:34:07.290183 | orchestrator | 2025-05-14 14:34:07.290201 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 14:34:07.290220 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.294) 0:00:24.085 ********* 2025-05-14 14:34:07.290231 | orchestrator | 2025-05-14 14:34:07.290242 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 14:34:07.290253 | orchestrator | Wednesday 14 May 2025 14:33:20 +0000 (0:00:00.217) 0:00:24.303 ********* 2025-05-14 14:34:07.290264 | orchestrator | 2025-05-14 14:34:07.290293 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-14 14:34:07.290305 | orchestrator | Wednesday 14 May 2025 14:33:20 +0000 (0:00:00.475) 0:00:24.778 ********* 2025-05-14 14:34:07.290315 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:34:07.290327 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:34:07.290337 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:34:07.290348 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:34:07.290359 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:34:07.290369 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:34:07.290380 | orchestrator | 2025-05-14 14:34:07.290391 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-14 14:34:07.290402 | orchestrator | Wednesday 14 May 2025 14:33:31 +0000 (0:00:10.574) 0:00:35.352 ********* 2025-05-14 14:34:07.290422 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:34:07.290434 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:34:07.290445 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:34:07.290455 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:34:07.290466 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:34:07.290476 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:34:07.290487 | orchestrator | 2025-05-14 14:34:07.290498 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-14 14:34:07.290509 | orchestrator | Wednesday 14 May 2025 14:33:33 +0000 (0:00:01.981) 0:00:37.334 ********* 2025-05-14 14:34:07.290519 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:34:07.290530 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:34:07.290541 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:34:07.290561 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:34:07.290572 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:34:07.290582 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:34:07.290593 | orchestrator | 2025-05-14 14:34:07.290612 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-14 14:34:07.290624 | orchestrator | Wednesday 14 May 2025 14:33:41 +0000 (0:00:08.826) 0:00:46.160 ********* 2025-05-14 14:34:07.290635 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-14 14:34:07.290646 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-14 14:34:07.290657 | orchestrator | changed: [testbed-node-0] => (item={'col': 
'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-14 14:34:07.290668 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-14 14:34:07.290679 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-14 14:34:07.290689 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-14 14:34:07.290700 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-14 14:34:07.290711 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-14 14:34:07.290722 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-14 14:34:07.290733 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-14 14:34:07.290743 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-14 14:34:07.290754 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-14 14:34:07.290765 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 14:34:07.290776 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 14:34:07.290791 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 14:34:07.290802 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 14:34:07.290812 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 14:34:07.290823 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 14:34:07.290833 | orchestrator | 2025-05-14 14:34:07.290844 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-14 14:34:07.290855 | orchestrator | Wednesday 14 May 2025 14:33:49 +0000 (0:00:07.832) 0:00:53.992 ********* 2025-05-14 14:34:07.290866 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-14 14:34:07.290877 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:34:07.290887 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-14 14:34:07.290898 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:34:07.290909 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-14 14:34:07.290920 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:34:07.290931 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-14 14:34:07.290941 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-14 14:34:07.290952 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-14 14:34:07.290970 | orchestrator | 2025-05-14 14:34:07.290981 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 
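The "Set system-id, hostname and hw-offload" results above, together with the bridge task above and the port task whose results follow below, all come down to idempotent ovs-vsctl operations against the freshly restarted Open vSwitch containers. The log does not show the exact module invocations, so the following is only a rough sketch of equivalent commands; the node, bridge and port names are taken from the log, everything else is an assumption.

```python
# Rough sketch only: the exact commands issued by the kolla-ansible openvswitch
# modules are not visible in this log. Names (testbed-node-0, br-ex, vxlan0)
# are taken from the output above; everything else is assumed.
import subprocess

def ovs_vsctl(*args: str) -> None:
    """Run a single idempotent ovs-vsctl command (e.g. via the container's wrapper)."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# "Set system-id, hostname and hw-offload" -- per node, values from the log
ovs_vsctl("set", "Open_vSwitch", ".", "external_ids:system-id=testbed-node-0")
ovs_vsctl("set", "Open_vSwitch", ".", "external_ids:hostname=testbed-node-0")
# hw-offload is listed with state=absent, i.e. ensure the key is not set
ovs_vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")

# "Ensuring OVS bridge/ports are properly setup" -- network nodes 0-2 only
ovs_vsctl("--may-exist", "add-br", "br-ex")
ovs_vsctl("--may-exist", "add-port", "br-ex", "vxlan0")
```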
2025-05-14 14:34:07.290992 | orchestrator | Wednesday 14 May 2025 14:33:52 +0000 (0:00:03.165) 0:00:57.158 ********* 2025-05-14 14:34:07.291003 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-14 14:34:07.291013 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:34:07.291024 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-14 14:34:07.291035 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:34:07.291045 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-14 14:34:07.291056 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:34:07.291067 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-14 14:34:07.291083 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-14 14:34:07.291095 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-14 14:34:07.291106 | orchestrator | 2025-05-14 14:34:07.291117 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-14 14:34:07.291127 | orchestrator | Wednesday 14 May 2025 14:33:57 +0000 (0:00:04.410) 0:01:01.569 ********* 2025-05-14 14:34:07.291138 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:34:07.291149 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:34:07.291160 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:34:07.291170 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:34:07.291181 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:34:07.291192 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:34:07.291202 | orchestrator | 2025-05-14 14:34:07.291213 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:34:07.291225 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:34:07.291237 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:34:07.291248 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:34:07.291259 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:34:07.291287 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:34:07.291298 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:34:07.291309 | orchestrator | 2025-05-14 14:34:07.291320 | orchestrator | 2025-05-14 14:34:07.291331 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:34:07.291342 | orchestrator | Wednesday 14 May 2025 14:34:06 +0000 (0:00:08.786) 0:01:10.355 ********* 2025-05-14 14:34:07.291353 | orchestrator | =============================================================================== 2025-05-14 14:34:07.291364 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.61s 2025-05-14 14:34:07.291374 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.57s 2025-05-14 14:34:07.291385 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.83s 2025-05-14 14:34:07.291396 | orchestrator | openvswitch : Ensuring OVS ports are properly setup 
--------------------- 4.41s 2025-05-14 14:34:07.291407 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.17s 2025-05-14 14:34:07.291417 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.11s 2025-05-14 14:34:07.291428 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.81s 2025-05-14 14:34:07.291446 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.73s 2025-05-14 14:34:07.291461 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.68s 2025-05-14 14:34:07.291472 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.22s 2025-05-14 14:34:07.291483 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.11s 2025-05-14 14:34:07.291494 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.98s 2025-05-14 14:34:07.291505 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.51s 2025-05-14 14:34:07.291516 | orchestrator | module-load : Load modules ---------------------------------------------- 1.40s 2025-05-14 14:34:07.291526 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.37s 2025-05-14 14:34:07.291537 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.29s 2025-05-14 14:34:07.291548 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.16s 2025-05-14 14:34:07.291559 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-05-14 14:34:07.291570 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2025-05-14 14:34:07.291580 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.58s 2025-05-14 14:34:07.291591 | orchestrator | 2025-05-14 14:34:07 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:07.291603 | orchestrator | 2025-05-14 14:34:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:10.330783 | orchestrator | 2025-05-14 14:34:10 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:10.331799 | orchestrator | 2025-05-14 14:34:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:10.333454 | orchestrator | 2025-05-14 14:34:10 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:10.334595 | orchestrator | 2025-05-14 14:34:10 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:10.337438 | orchestrator | 2025-05-14 14:34:10 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:10.337471 | orchestrator | 2025-05-14 14:34:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:13.380014 | orchestrator | 2025-05-14 14:34:13 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:13.380414 | orchestrator | 2025-05-14 14:34:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:13.380981 | orchestrator | 2025-05-14 14:34:13 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:13.384783 | orchestrator | 2025-05-14 14:34:13 | INFO  | Task 
5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:13.385416 | orchestrator | 2025-05-14 14:34:13 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:13.385445 | orchestrator | 2025-05-14 14:34:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:16.437670 | orchestrator | 2025-05-14 14:34:16 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:16.438490 | orchestrator | 2025-05-14 14:34:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:16.439115 | orchestrator | 2025-05-14 14:34:16 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:16.441164 | orchestrator | 2025-05-14 14:34:16 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:16.444547 | orchestrator | 2025-05-14 14:34:16 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:16.444638 | orchestrator | 2025-05-14 14:34:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:19.482323 | orchestrator | 2025-05-14 14:34:19 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:19.485125 | orchestrator | 2025-05-14 14:34:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:19.485147 | orchestrator | 2025-05-14 14:34:19 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:19.485156 | orchestrator | 2025-05-14 14:34:19 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:19.485164 | orchestrator | 2025-05-14 14:34:19 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:19.485172 | orchestrator | 2025-05-14 14:34:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:22.545668 | orchestrator | 2025-05-14 14:34:22 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:22.547450 | orchestrator | 2025-05-14 14:34:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:22.548176 | orchestrator | 2025-05-14 14:34:22 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:22.549813 | orchestrator | 2025-05-14 14:34:22 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:22.550537 | orchestrator | 2025-05-14 14:34:22 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:22.550555 | orchestrator | 2025-05-14 14:34:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:25.583628 | orchestrator | 2025-05-14 14:34:25 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:25.584386 | orchestrator | 2025-05-14 14:34:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:25.585331 | orchestrator | 2025-05-14 14:34:25 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:25.586973 | orchestrator | 2025-05-14 14:34:25 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:25.588104 | orchestrator | 2025-05-14 14:34:25 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:25.588129 | orchestrator | 2025-05-14 14:34:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:28.639060 | orchestrator | 2025-05-14 14:34:28 | INFO  | Task 
e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:28.641030 | orchestrator | 2025-05-14 14:34:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:28.644676 | orchestrator | 2025-05-14 14:34:28 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:28.645507 | orchestrator | 2025-05-14 14:34:28 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:28.646532 | orchestrator | 2025-05-14 14:34:28 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:28.646642 | orchestrator | 2025-05-14 14:34:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:31.687077 | orchestrator | 2025-05-14 14:34:31 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:31.687932 | orchestrator | 2025-05-14 14:34:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:31.688183 | orchestrator | 2025-05-14 14:34:31 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:31.689056 | orchestrator | 2025-05-14 14:34:31 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:31.696812 | orchestrator | 2025-05-14 14:34:31 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:31.696841 | orchestrator | 2025-05-14 14:34:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:34.741216 | orchestrator | 2025-05-14 14:34:34 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:34.741710 | orchestrator | 2025-05-14 14:34:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:34.747588 | orchestrator | 2025-05-14 14:34:34 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:34.750673 | orchestrator | 2025-05-14 14:34:34 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:34.752354 | orchestrator | 2025-05-14 14:34:34 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:34.752530 | orchestrator | 2025-05-14 14:34:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:37.791837 | orchestrator | 2025-05-14 14:34:37 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:37.792503 | orchestrator | 2025-05-14 14:34:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:37.794984 | orchestrator | 2025-05-14 14:34:37 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:37.804017 | orchestrator | 2025-05-14 14:34:37 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:37.812957 | orchestrator | 2025-05-14 14:34:37 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:37.813030 | orchestrator | 2025-05-14 14:34:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:40.847465 | orchestrator | 2025-05-14 14:34:40 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:40.848657 | orchestrator | 2025-05-14 14:34:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:40.850493 | orchestrator | 2025-05-14 14:34:40 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:40.851927 | orchestrator | 2025-05-14 
14:34:40 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:40.853332 | orchestrator | 2025-05-14 14:34:40 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:40.853367 | orchestrator | 2025-05-14 14:34:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:43.888482 | orchestrator | 2025-05-14 14:34:43 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:43.893845 | orchestrator | 2025-05-14 14:34:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:43.898652 | orchestrator | 2025-05-14 14:34:43 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:43.900954 | orchestrator | 2025-05-14 14:34:43 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:43.903145 | orchestrator | 2025-05-14 14:34:43 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:43.903527 | orchestrator | 2025-05-14 14:34:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:46.947811 | orchestrator | 2025-05-14 14:34:46 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:46.947980 | orchestrator | 2025-05-14 14:34:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:46.948696 | orchestrator | 2025-05-14 14:34:46 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:46.949356 | orchestrator | 2025-05-14 14:34:46 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:46.952198 | orchestrator | 2025-05-14 14:34:46 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:46.952253 | orchestrator | 2025-05-14 14:34:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:50.004151 | orchestrator | 2025-05-14 14:34:50 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:50.004894 | orchestrator | 2025-05-14 14:34:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:50.005957 | orchestrator | 2025-05-14 14:34:50 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:50.009371 | orchestrator | 2025-05-14 14:34:50 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:50.010410 | orchestrator | 2025-05-14 14:34:50 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:50.010451 | orchestrator | 2025-05-14 14:34:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:53.048746 | orchestrator | 2025-05-14 14:34:53 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:53.049665 | orchestrator | 2025-05-14 14:34:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:53.051357 | orchestrator | 2025-05-14 14:34:53 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:53.051890 | orchestrator | 2025-05-14 14:34:53 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:53.052821 | orchestrator | 2025-05-14 14:34:53 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:53.053019 | orchestrator | 2025-05-14 14:34:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:56.090641 | orchestrator | 2025-05-14 
14:34:56 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:56.091516 | orchestrator | 2025-05-14 14:34:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:56.097239 | orchestrator | 2025-05-14 14:34:56 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:56.097647 | orchestrator | 2025-05-14 14:34:56 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:56.098573 | orchestrator | 2025-05-14 14:34:56 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:56.099299 | orchestrator | 2025-05-14 14:34:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:34:59.145869 | orchestrator | 2025-05-14 14:34:59 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:34:59.146899 | orchestrator | 2025-05-14 14:34:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:34:59.149397 | orchestrator | 2025-05-14 14:34:59 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:34:59.153601 | orchestrator | 2025-05-14 14:34:59 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:34:59.154455 | orchestrator | 2025-05-14 14:34:59 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:34:59.154523 | orchestrator | 2025-05-14 14:34:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:02.200774 | orchestrator | 2025-05-14 14:35:02 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:02.205457 | orchestrator | 2025-05-14 14:35:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:02.206067 | orchestrator | 2025-05-14 14:35:02 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:02.207246 | orchestrator | 2025-05-14 14:35:02 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:02.207997 | orchestrator | 2025-05-14 14:35:02 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:02.208083 | orchestrator | 2025-05-14 14:35:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:05.242686 | orchestrator | 2025-05-14 14:35:05 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:05.244949 | orchestrator | 2025-05-14 14:35:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:05.248824 | orchestrator | 2025-05-14 14:35:05 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:05.250134 | orchestrator | 2025-05-14 14:35:05 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:05.251054 | orchestrator | 2025-05-14 14:35:05 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:05.251081 | orchestrator | 2025-05-14 14:35:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:08.287287 | orchestrator | 2025-05-14 14:35:08 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:08.288534 | orchestrator | 2025-05-14 14:35:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:08.289381 | orchestrator | 2025-05-14 14:35:08 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:08.292554 | 
orchestrator | 2025-05-14 14:35:08 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:08.293064 | orchestrator | 2025-05-14 14:35:08 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:08.293137 | orchestrator | 2025-05-14 14:35:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:11.328387 | orchestrator | 2025-05-14 14:35:11 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:11.328639 | orchestrator | 2025-05-14 14:35:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:11.329959 | orchestrator | 2025-05-14 14:35:11 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:11.330386 | orchestrator | 2025-05-14 14:35:11 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:11.330968 | orchestrator | 2025-05-14 14:35:11 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:11.330995 | orchestrator | 2025-05-14 14:35:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:14.364681 | orchestrator | 2025-05-14 14:35:14 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:14.365034 | orchestrator | 2025-05-14 14:35:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:14.365621 | orchestrator | 2025-05-14 14:35:14 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:14.366139 | orchestrator | 2025-05-14 14:35:14 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:14.367211 | orchestrator | 2025-05-14 14:35:14 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:14.367329 | orchestrator | 2025-05-14 14:35:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:17.404921 | orchestrator | 2025-05-14 14:35:17 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:17.407087 | orchestrator | 2025-05-14 14:35:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:17.407119 | orchestrator | 2025-05-14 14:35:17 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:17.407131 | orchestrator | 2025-05-14 14:35:17 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:17.407494 | orchestrator | 2025-05-14 14:35:17 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:17.407522 | orchestrator | 2025-05-14 14:35:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:20.439024 | orchestrator | 2025-05-14 14:35:20 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:20.439341 | orchestrator | 2025-05-14 14:35:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:20.439872 | orchestrator | 2025-05-14 14:35:20 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:20.440451 | orchestrator | 2025-05-14 14:35:20 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:20.441014 | orchestrator | 2025-05-14 14:35:20 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:20.441064 | orchestrator | 2025-05-14 14:35:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:23.470516 | 
orchestrator | 2025-05-14 14:35:23 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state STARTED 2025-05-14 14:35:23.470580 | orchestrator | 2025-05-14 14:35:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:23.470593 | orchestrator | 2025-05-14 14:35:23 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:23.470604 | orchestrator | 2025-05-14 14:35:23 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:23.471529 | orchestrator | 2025-05-14 14:35:23 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:23.471621 | orchestrator | 2025-05-14 14:35:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:26.508691 | orchestrator | 2025-05-14 14:35:26 | INFO  | Task e2a8530f-d783-4dde-823b-d4e7a84b1a70 is in state SUCCESS 2025-05-14 14:35:26.509871 | orchestrator | 2025-05-14 14:35:26.509914 | orchestrator | 2025-05-14 14:35:26.509927 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-14 14:35:26.509939 | orchestrator | 2025-05-14 14:35:26.509950 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-14 14:35:26.509961 | orchestrator | Wednesday 14 May 2025 14:33:16 +0000 (0:00:00.138) 0:00:00.138 ********* 2025-05-14 14:35:26.509972 | orchestrator | ok: [localhost] => { 2025-05-14 14:35:26.509985 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-14 14:35:26.509996 | orchestrator | } 2025-05-14 14:35:26.510007 | orchestrator | 2025-05-14 14:35:26.510097 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-14 14:35:26.510114 | orchestrator | Wednesday 14 May 2025 14:33:16 +0000 (0:00:00.059) 0:00:00.197 ********* 2025-05-14 14:35:26.510145 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-14 14:35:26.510157 | orchestrator | ...ignoring 2025-05-14 14:35:26.510169 | orchestrator | 2025-05-14 14:35:26.510214 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-14 14:35:26.510225 | orchestrator | Wednesday 14 May 2025 14:33:18 +0000 (0:00:02.753) 0:00:02.951 ********* 2025-05-14 14:35:26.510236 | orchestrator | skipping: [localhost] 2025-05-14 14:35:26.510246 | orchestrator | 2025-05-14 14:35:26.510257 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-14 14:35:26.510267 | orchestrator | Wednesday 14 May 2025 14:33:18 +0000 (0:00:00.077) 0:00:03.029 ********* 2025-05-14 14:35:26.510278 | orchestrator | ok: [localhost] 2025-05-14 14:35:26.510289 | orchestrator | 2025-05-14 14:35:26.510299 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:35:26.510310 | orchestrator | 2025-05-14 14:35:26.510321 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:35:26.510331 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.138) 0:00:03.167 ********* 2025-05-14 14:35:26.510342 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:35:26.510352 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:35:26.510363 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:35:26.510373 | orchestrator | 2025-05-14 14:35:26.510384 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:35:26.510395 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.333) 0:00:03.501 ********* 2025-05-14 14:35:26.510405 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-14 14:35:26.510417 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-14 14:35:26.510434 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-14 14:35:26.510445 | orchestrator | 2025-05-14 14:35:26.510455 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-14 14:35:26.510467 | orchestrator | 2025-05-14 14:35:26.510479 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 14:35:26.510491 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:00.517) 0:00:04.019 ********* 2025-05-14 14:35:26.510504 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:35:26.510516 | orchestrator | 2025-05-14 14:35:26.510528 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-14 14:35:26.510541 | orchestrator | Wednesday 14 May 2025 14:33:21 +0000 (0:00:01.751) 0:00:05.770 ********* 2025-05-14 14:35:26.510553 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:35:26.510565 | orchestrator | 2025-05-14 14:35:26.510575 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-14 14:35:26.510586 | orchestrator | Wednesday 14 May 2025 14:33:23 +0000 (0:00:01.731) 0:00:07.501 ********* 2025-05-14 14:35:26.510597 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.510608 | orchestrator | 2025-05-14 14:35:26.510618 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-05-14 14:35:26.510629 | orchestrator | Wednesday 14 May 2025 14:33:23 +0000 (0:00:00.446) 0:00:07.948 ********* 2025-05-14 14:35:26.510639 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.510649 | orchestrator | 2025-05-14 14:35:26.510660 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-14 14:35:26.510670 | orchestrator | Wednesday 14 May 2025 14:33:24 +0000 (0:00:00.522) 0:00:08.470 ********* 2025-05-14 14:35:26.510681 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.510691 | orchestrator | 2025-05-14 14:35:26.510702 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-14 14:35:26.510712 | orchestrator | Wednesday 14 May 2025 14:33:24 +0000 (0:00:00.326) 0:00:08.797 ********* 2025-05-14 14:35:26.510723 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.510733 | orchestrator | 2025-05-14 14:35:26.510752 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 14:35:26.510762 | orchestrator | Wednesday 14 May 2025 14:33:25 +0000 (0:00:00.375) 0:00:09.173 ********* 2025-05-14 14:35:26.510773 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:35:26.510784 | orchestrator | 2025-05-14 14:35:26.510794 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-14 14:35:26.510805 | orchestrator | Wednesday 14 May 2025 14:33:25 +0000 (0:00:00.760) 0:00:09.933 ********* 2025-05-14 14:35:26.510815 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:35:26.510826 | orchestrator | 2025-05-14 14:35:26.510836 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-14 14:35:26.510847 | orchestrator | Wednesday 14 May 2025 14:33:26 +0000 (0:00:00.740) 0:00:10.673 ********* 2025-05-14 14:35:26.510857 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.510867 | orchestrator | 2025-05-14 14:35:26.510878 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-14 14:35:26.510889 | orchestrator | Wednesday 14 May 2025 14:33:26 +0000 (0:00:00.292) 0:00:10.966 ********* 2025-05-14 14:35:26.510899 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.510910 | orchestrator | 2025-05-14 14:35:26.510932 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-14 14:35:26.510943 | orchestrator | Wednesday 14 May 2025 14:33:27 +0000 (0:00:00.328) 0:00:11.294 ********* 2025-05-14 14:35:26.510959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.510981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.510995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511013 | orchestrator | 2025-05-14 14:35:26.511025 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-14 14:35:26.511036 | orchestrator | Wednesday 14 May 2025 14:33:27 +0000 (0:00:00.780) 0:00:12.075 ********* 2025-05-14 14:35:26.511057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511106 | orchestrator | 2025-05-14 14:35:26.511117 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-14 14:35:26.511128 | orchestrator | Wednesday 14 May 2025 14:33:29 +0000 (0:00:01.448) 0:00:13.524 ********* 2025-05-14 14:35:26.511139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 14:35:26.511149 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 14:35:26.511160 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 14:35:26.511215 | orchestrator | 2025-05-14 14:35:26.511229 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-14 14:35:26.511240 | orchestrator | Wednesday 14 May 2025 14:33:31 +0000 (0:00:01.921) 0:00:15.446 ********* 
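The "Ensuring config directories exist", "Copying over config.json files for services" and the rabbitmq-env.conf/rabbitmq.conf tasks above all follow the same kolla-ansible staging pattern: render a Jinja2 template from the role into /etc/kolla/<service>/ on every node and notify the container restart handler. A minimal standalone sketch of that pattern is shown below; the paths and template names are illustrative assumptions, not the actual role code.

```yaml
# Minimal sketch of the config-staging pattern used by the tasks above
# (illustrative paths and template names, not the actual kolla-ansible code).
- name: Ensure the RabbitMQ config directory exists
  ansible.builtin.file:
    path: /etc/kolla/rabbitmq
    state: directory
    mode: "0770"

- name: Render RabbitMQ configuration files from role templates
  ansible.builtin.template:
    src: "{{ item }}.j2"                    # hypothetical template names
    dest: "/etc/kolla/rabbitmq/{{ item }}"
    mode: "0660"
  loop:
    - rabbitmq-env.conf
    - rabbitmq.conf
    - definitions.json
    - enabled_plugins
  notify: Restart rabbitmq container
```

Because every file change notifies the same handler, the container is restarted at most once per node regardless of how many templates changed.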
2025-05-14 14:35:26.511251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 14:35:26.511262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 14:35:26.511272 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 14:35:26.511283 | orchestrator | 2025-05-14 14:35:26.511294 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-14 14:35:26.511305 | orchestrator | Wednesday 14 May 2025 14:33:34 +0000 (0:00:03.012) 0:00:18.458 ********* 2025-05-14 14:35:26.511315 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 14:35:26.511326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 14:35:26.511337 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 14:35:26.511347 | orchestrator | 2025-05-14 14:35:26.511385 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-14 14:35:26.511397 | orchestrator | Wednesday 14 May 2025 14:33:36 +0000 (0:00:01.711) 0:00:20.169 ********* 2025-05-14 14:35:26.511408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 14:35:26.511419 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 14:35:26.511429 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 14:35:26.511440 | orchestrator | 2025-05-14 14:35:26.511451 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-14 14:35:26.511461 | orchestrator | Wednesday 14 May 2025 14:33:37 +0000 (0:00:01.918) 0:00:22.087 ********* 2025-05-14 14:35:26.511472 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 14:35:26.511483 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 14:35:26.511494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 14:35:26.511504 | orchestrator | 2025-05-14 14:35:26.511515 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-14 14:35:26.511526 | orchestrator | Wednesday 14 May 2025 14:33:39 +0000 (0:00:01.621) 0:00:23.709 ********* 2025-05-14 14:35:26.511536 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 14:35:26.511547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 14:35:26.511558 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 14:35:26.511568 | orchestrator | 2025-05-14 14:35:26.511579 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 14:35:26.511598 | orchestrator | Wednesday 14 May 2025 14:33:41 +0000 (0:00:01.500) 0:00:25.209 ********* 2025-05-14 14:35:26.511609 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.511625 | orchestrator | skipping: [testbed-node-1] 
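Each changed item above prints the full container definition the role works from: image, environment, bind mounts, named volumes, and the healthcheck that runs healthcheck_rabbitmq every 30 seconds. Kolla-ansible hands this dict to its own container module; a rough standalone equivalent using community.docker.docker_container is sketched below for illustration (the cluster cookie is deliberately omitted, and the module choice is an assumption, not what the role actually calls).

```yaml
# Rough equivalent of the rabbitmq container definition printed in the items
# above, expressed with community.docker.docker_container for illustration
# (RABBITMQ_CLUSTER_COOKIE omitted; kolla-ansible uses its own module instead).
- name: Run the rabbitmq container
  community.docker.docker_container:
    name: rabbitmq
    image: registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206
    state: started
    restart_policy: unless-stopped
    env:
      KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
      RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
    volumes:
      - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - rabbitmq:/var/lib/rabbitmq/
      - kolla_logs:/var/log/kolla/
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_rabbitmq"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
```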
2025-05-14 14:35:26.511643 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:35:26.511661 | orchestrator | 2025-05-14 14:35:26.511680 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-14 14:35:26.511694 | orchestrator | Wednesday 14 May 2025 14:33:41 +0000 (0:00:00.465) 0:00:25.675 ********* 2025-05-14 14:35:26.511712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:35:26.511762 | orchestrator | 2025-05-14 14:35:26.511773 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-14 14:35:26.511791 | orchestrator | Wednesday 14 May 2025 14:33:43 +0000 (0:00:01.616) 0:00:27.291 ********* 2025-05-14 14:35:26.511802 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:35:26.511813 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:35:26.511823 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:35:26.511833 | orchestrator | 2025-05-14 14:35:26.511844 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-14 14:35:26.511855 | orchestrator | Wednesday 14 May 2025 14:33:44 +0000 (0:00:00.906) 0:00:28.197 ********* 2025-05-14 14:35:26.511865 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:35:26.511876 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:35:26.511887 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:35:26.511897 | orchestrator | 2025-05-14 14:35:26.511908 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-14 14:35:26.511919 | orchestrator | Wednesday 14 May 2025 14:33:49 +0000 (0:00:05.805) 0:00:34.003 ********* 2025-05-14 14:35:26.511929 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:35:26.511940 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:35:26.511950 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:35:26.511961 | orchestrator | 2025-05-14 14:35:26.511972 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 14:35:26.511982 | orchestrator | 2025-05-14 14:35:26.511997 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 14:35:26.512008 | orchestrator | Wednesday 14 May 2025 14:33:50 +0000 (0:00:00.695) 0:00:34.699 ********* 2025-05-14 14:35:26.512019 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:35:26.512029 | orchestrator | 2025-05-14 14:35:26.512040 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 14:35:26.512050 | orchestrator | Wednesday 14 May 2025 14:33:51 +0000 (0:00:00.647) 0:00:35.347 ********* 2025-05-14 14:35:26.512061 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:35:26.512071 | orchestrator | 2025-05-14 14:35:26.512082 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 14:35:26.512093 | orchestrator | Wednesday 14 May 2025 14:33:51 +0000 (0:00:00.739) 0:00:36.086 ********* 2025-05-14 14:35:26.512103 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:35:26.512114 | orchestrator | 2025-05-14 14:35:26.512125 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 14:35:26.512136 | orchestrator | Wednesday 14 May 2025 14:33:58 +0000 (0:00:06.879) 0:00:42.966 ********* 2025-05-14 14:35:26.512147 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:35:26.512157 | orchestrator | 2025-05-14 14:35:26.512168 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 14:35:26.512226 | orchestrator | 2025-05-14 14:35:26.512238 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 
14:35:26.512249 | orchestrator | Wednesday 14 May 2025 14:34:47 +0000 (0:00:48.876) 0:01:31.842 ********* 2025-05-14 14:35:26.512259 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:35:26.512270 | orchestrator | 2025-05-14 14:35:26.512280 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 14:35:26.512291 | orchestrator | Wednesday 14 May 2025 14:34:48 +0000 (0:00:00.803) 0:01:32.646 ********* 2025-05-14 14:35:26.512302 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:35:26.512313 | orchestrator | 2025-05-14 14:35:26.512323 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 14:35:26.512334 | orchestrator | Wednesday 14 May 2025 14:34:48 +0000 (0:00:00.266) 0:01:32.912 ********* 2025-05-14 14:35:26.512345 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:35:26.512355 | orchestrator | 2025-05-14 14:35:26.512366 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 14:35:26.512376 | orchestrator | Wednesday 14 May 2025 14:34:50 +0000 (0:00:01.701) 0:01:34.613 ********* 2025-05-14 14:35:26.512387 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:35:26.512398 | orchestrator | 2025-05-14 14:35:26.512408 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 14:35:26.512426 | orchestrator | 2025-05-14 14:35:26.512437 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 14:35:26.512447 | orchestrator | Wednesday 14 May 2025 14:35:04 +0000 (0:00:14.442) 0:01:49.056 ********* 2025-05-14 14:35:26.512458 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:35:26.512469 | orchestrator | 2025-05-14 14:35:26.512479 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 14:35:26.512490 | orchestrator | Wednesday 14 May 2025 14:35:05 +0000 (0:00:00.583) 0:01:49.640 ********* 2025-05-14 14:35:26.512500 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:35:26.512511 | orchestrator | 2025-05-14 14:35:26.512522 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 14:35:26.512539 | orchestrator | Wednesday 14 May 2025 14:35:05 +0000 (0:00:00.225) 0:01:49.865 ********* 2025-05-14 14:35:26.512551 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:35:26.512561 | orchestrator | 2025-05-14 14:35:26.512572 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 14:35:26.512583 | orchestrator | Wednesday 14 May 2025 14:35:07 +0000 (0:00:01.752) 0:01:51.617 ********* 2025-05-14 14:35:26.512593 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:35:26.512604 | orchestrator | 2025-05-14 14:35:26.512615 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-14 14:35:26.512625 | orchestrator | 2025-05-14 14:35:26.512636 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-14 14:35:26.512647 | orchestrator | Wednesday 14 May 2025 14:35:20 +0000 (0:00:13.447) 0:02:05.065 ********* 2025-05-14 14:35:26.512658 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:35:26.512669 | orchestrator | 2025-05-14 14:35:26.512680 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-05-14 14:35:26.512690 | orchestrator | Wednesday 14 May 2025 14:35:21 +0000 (0:00:00.795) 0:02:05.861 ********* 2025-05-14 14:35:26.512701 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 14:35:26.512711 | orchestrator | enable_outward_rabbitmq_True 2025-05-14 14:35:26.512722 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 14:35:26.512732 | orchestrator | outward_rabbitmq_restart 2025-05-14 14:35:26.512743 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:35:26.512753 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:35:26.512764 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:35:26.512774 | orchestrator | 2025-05-14 14:35:26.512785 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-14 14:35:26.512796 | orchestrator | skipping: no hosts matched 2025-05-14 14:35:26.512806 | orchestrator | 2025-05-14 14:35:26.512817 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-14 14:35:26.512827 | orchestrator | skipping: no hosts matched 2025-05-14 14:35:26.512838 | orchestrator | 2025-05-14 14:35:26.512848 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-14 14:35:26.512859 | orchestrator | skipping: no hosts matched 2025-05-14 14:35:26.512869 | orchestrator | 2025-05-14 14:35:26.512880 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:35:26.512890 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-14 14:35:26.512901 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 14:35:26.512912 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:35:26.512923 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:35:26.512939 | orchestrator | 2025-05-14 14:35:26.512950 | orchestrator | 2025-05-14 14:35:26.512960 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:35:26.512971 | orchestrator | Wednesday 14 May 2025 14:35:24 +0000 (0:00:02.898) 0:02:08.759 ********* 2025-05-14 14:35:26.512982 | orchestrator | =============================================================================== 2025-05-14 14:35:26.513009 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.77s 2025-05-14 14:35:26.513020 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.33s 2025-05-14 14:35:26.513030 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.81s 2025-05-14 14:35:26.513041 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.01s 2025-05-14 14:35:26.513052 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.90s 2025-05-14 14:35:26.513063 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.75s 2025-05-14 14:35:26.513073 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s 2025-05-14 14:35:26.513084 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 
1.92s 2025-05-14 14:35:26.513695 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.92s 2025-05-14 14:35:26.513713 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.75s 2025-05-14 14:35:26.513724 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.73s 2025-05-14 14:35:26.513735 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.71s 2025-05-14 14:35:26.513745 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.62s 2025-05-14 14:35:26.513756 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.62s 2025-05-14 14:35:26.513766 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.50s 2025-05-14 14:35:26.513777 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.45s 2025-05-14 14:35:26.513787 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.23s 2025-05-14 14:35:26.513798 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.91s 2025-05-14 14:35:26.513809 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.79s 2025-05-14 14:35:26.513819 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.78s 2025-05-14 14:35:26.513972 | orchestrator | 2025-05-14 14:35:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:26.513989 | orchestrator | 2025-05-14 14:35:26 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:26.514001 | orchestrator | 2025-05-14 14:35:26 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:26.514393 | orchestrator | 2025-05-14 14:35:26 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:26.514433 | orchestrator | 2025-05-14 14:35:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:29.553506 | orchestrator | 2025-05-14 14:35:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:29.553621 | orchestrator | 2025-05-14 14:35:29 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:29.554853 | orchestrator | 2025-05-14 14:35:29 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:29.557893 | orchestrator | 2025-05-14 14:35:29 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:29.557931 | orchestrator | 2025-05-14 14:35:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:32.602321 | orchestrator | 2025-05-14 14:35:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:35:32.603422 | orchestrator | 2025-05-14 14:35:32 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:35:32.605067 | orchestrator | 2025-05-14 14:35:32 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:35:32.606413 | orchestrator | 2025-05-14 14:35:32 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:35:32.606438 | orchestrator | 2025-05-14 14:35:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:35:35.657265 | orchestrator | 2025-05-14 14:35:35 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED [... same check repeated every ~3 seconds from 14:35:29 to 14:36:21; all four tasks remained in state STARTED ...] 2025-05-14 14:36:24.514644 | orchestrator | 2025-05-14 14:36:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:36:24.514785 | orchestrator | 2025-05-14 14:36:24 | INFO  | Task
9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:36:24.516658 | orchestrator | 2025-05-14 14:36:24 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state STARTED 2025-05-14 14:36:24.518349 | orchestrator | 2025-05-14 14:36:24 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:36:24.518377 | orchestrator | 2025-05-14 14:36:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:36:27.547235 | orchestrator | 2025-05-14 14:36:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:36:27.547558 | orchestrator | 2025-05-14 14:36:27 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:36:27.548353 | orchestrator | 2025-05-14 14:36:27 | INFO  | Task 5c6c00fa-7dd0-4b49-9f51-80a42b279710 is in state SUCCESS 2025-05-14 14:36:27.549792 | orchestrator | 2025-05-14 14:36:27.549819 | orchestrator | 2025-05-14 14:36:27.549831 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:36:27.549843 | orchestrator | 2025-05-14 14:36:27.549854 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:36:27.549867 | orchestrator | Wednesday 14 May 2025 14:34:09 +0000 (0:00:00.230) 0:00:00.230 ********* 2025-05-14 14:36:27.549879 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:36:27.549892 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:36:27.549903 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:36:27.549913 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.549925 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.549936 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.549948 | orchestrator | 2025-05-14 14:36:27.549959 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:36:27.549971 | orchestrator | Wednesday 14 May 2025 14:34:10 +0000 (0:00:00.730) 0:00:00.961 ********* 2025-05-14 14:36:27.549982 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-14 14:36:27.549994 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-14 14:36:27.550006 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-14 14:36:27.550076 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-14 14:36:27.550091 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-14 14:36:27.550103 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-14 14:36:27.550115 | orchestrator | 2025-05-14 14:36:27.550127 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-14 14:36:27.550140 | orchestrator | 2025-05-14 14:36:27.550152 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-14 14:36:27.550261 | orchestrator | Wednesday 14 May 2025 14:34:11 +0000 (0:00:01.205) 0:00:02.166 ********* 2025-05-14 14:36:27.550277 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:36:27.550290 | orchestrator | 2025-05-14 14:36:27.550303 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-14 14:36:27.550315 | orchestrator | Wednesday 14 May 2025 14:34:13 +0000 (0:00:01.639) 0:00:03.806 ********* 2025-05-14 14:36:27.550329 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550498 | orchestrator | 2025-05-14 14:36:27.550510 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-14 14:36:27.550522 | orchestrator | Wednesday 14 May 2025 14:34:15 +0000 (0:00:01.848) 0:00:05.655 ********* 2025-05-14 14:36:27.550535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550566 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550624 | orchestrator | 2025-05-14 14:36:27.550637 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-14 14:36:27.550649 | orchestrator | Wednesday 14 May 2025 14:34:18 +0000 (0:00:02.988) 0:00:08.643 ********* 2025-05-14 14:36:27.550662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550763 | orchestrator | 2025-05-14 14:36:27.550776 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-14 14:36:27.550789 | orchestrator | Wednesday 14 May 2025 14:34:19 +0000 (0:00:01.006) 0:00:09.650 ********* 2025-05-14 14:36:27.550802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550882 | orchestrator | 2025-05-14 14:36:27.550895 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-14 14:36:27.550908 | orchestrator | Wednesday 14 May 2025 14:34:21 +0000 (0:00:02.174) 0:00:11.825 ********* 2025-05-14 14:36:27.550926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.550973 | orchestrator | 2025-05-14 14:36:27.550980 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-14 14:36:27.550987 | orchestrator | Wednesday 14 May 2025 14:34:22 +0000 (0:00:01.361) 0:00:13.186 ********* 2025-05-14 14:36:27.550993 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.551001 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:36:27.551007 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.551014 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:36:27.551020 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.551027 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:36:27.551033 | orchestrator | 2025-05-14 14:36:27.551040 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-14 14:36:27.551047 | orchestrator | Wednesday 14 May 2025 14:34:25 +0000 (0:00:02.911) 0:00:16.097 ********* 2025-05-14 14:36:27.551053 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-14 14:36:27.551060 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-14 14:36:27.551067 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-14 14:36:27.551078 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-14 14:36:27.551085 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-14 14:36:27.551091 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-14 14:36:27.551098 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 14:36:27.551105 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 14:36:27.551116 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 14:36:27.551122 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 14:36:27.551129 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 14:36:27.551139 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 14:36:27.551146 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 14:36:27.551154 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 14:36:27.551161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 14:36:27.551167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 14:36:27.551193 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 14:36:27.551201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 14:36:27.551207 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 14:36:27.551215 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 14:36:27.551221 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 14:36:27.551228 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 14:36:27.551235 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 14:36:27.551241 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 14:36:27.551248 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 14:36:27.551255 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 14:36:27.551261 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 14:36:27.551268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 14:36:27.551274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 14:36:27.551281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 14:36:27.551288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 14:36:27.551294 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 14:36:27.551301 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 14:36:27.551308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 14:36:27.551314 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2025-05-14 14:36:27.551321 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 14:36:27.551327 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 14:36:27.551334 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 14:36:27.551345 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 14:36:27.551352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 14:36:27.551363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 14:36:27.551370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 14:36:27.551376 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-14 14:36:27.551383 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-14 14:36:27.551390 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-14 14:36:27.551397 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-14 14:36:27.551407 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-14 14:36:27.551414 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-14 14:36:27.551420 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 14:36:27.551427 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 14:36:27.551434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 14:36:27.551441 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 14:36:27.551447 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 14:36:27.551454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 14:36:27.551461 | orchestrator | 2025-05-14 14:36:27.551473 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 14:36:27.551480 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:18.694) 0:00:34.791 ********* 2025-05-14 14:36:27.551487 | orchestrator | 2025-05-14 14:36:27.551494 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 
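The "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks above amount to writing keys into the external_ids map of the local Open_vSwitch record. A minimal sketch of roughly equivalent ovs-vsctl calls, using testbed-node-0's values from the loop output; the docker exec wrapper, the openvswitch_vswitchd container name and the exact bridge options are assumptions, not taken from this log:

    # Create the integration bridge if it does not exist yet (bridge options assumed default).
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-int

    # Point ovn-controller at the SB DB cluster and set the Geneve tunnel endpoint,
    # mirroring the item values reported for testbed-node-0 above.
    docker exec openvswitch_vswitchd ovs-vsctl set open_vswitch . \
        external_ids:ovn-remote='"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"' \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-encap-ip=192.168.16.10 \
        external_ids:ovn-remote-probe-interval=60000 \
        external_ids:ovn-openflow-probe-interval=60 \
        external_ids:ovn-monitor-all=false \
        external_ids:ovn-bridge-mappings=physnet1:br-ex \
        external_ids:ovn-cms-options='"enable-chassis-as-gw,availability-zones=nova"'

    # Items reported with state 'absent' above are removed from the map instead, e.g.:
    docker exec openvswitch_vswitchd ovs-vsctl remove open_vswitch . external_ids ovn-cms-options
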
2025-05-14 14:36:27.551500 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.061) 0:00:34.853 ********* 2025-05-14 14:36:27.551507 | orchestrator | 2025-05-14 14:36:27.551513 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 14:36:27.551520 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.160) 0:00:35.013 ********* 2025-05-14 14:36:27.551527 | orchestrator | 2025-05-14 14:36:27.551533 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 14:36:27.551540 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.066) 0:00:35.080 ********* 2025-05-14 14:36:27.551546 | orchestrator | 2025-05-14 14:36:27.551553 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 14:36:27.551560 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.053) 0:00:35.134 ********* 2025-05-14 14:36:27.551567 | orchestrator | 2025-05-14 14:36:27.551573 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 14:36:27.551592 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.072) 0:00:35.206 ********* 2025-05-14 14:36:27.551598 | orchestrator | 2025-05-14 14:36:27.551605 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-14 14:36:27.551612 | orchestrator | Wednesday 14 May 2025 14:34:45 +0000 (0:00:00.058) 0:00:35.264 ********* 2025-05-14 14:36:27.551618 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:36:27.551625 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.551632 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:36:27.551638 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:36:27.551645 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.551651 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.551658 | orchestrator | 2025-05-14 14:36:27.551664 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-14 14:36:27.551671 | orchestrator | Wednesday 14 May 2025 14:34:47 +0000 (0:00:01.978) 0:00:37.242 ********* 2025-05-14 14:36:27.551678 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.551684 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:36:27.551691 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:36:27.551697 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.551704 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:36:27.551710 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.551717 | orchestrator | 2025-05-14 14:36:27.551723 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-14 14:36:27.551730 | orchestrator | 2025-05-14 14:36:27.551736 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 14:36:27.551743 | orchestrator | Wednesday 14 May 2025 14:35:10 +0000 (0:00:23.073) 0:01:00.316 ********* 2025-05-14 14:36:27.551750 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:36:27.551757 | orchestrator | 2025-05-14 14:36:27.551763 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 14:36:27.551770 | orchestrator | Wednesday 14 May 2025 14:35:10 +0000 (0:00:00.447) 0:01:00.763 ********* 2025-05-14 14:36:27.551777 
| orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:36:27.551783 | orchestrator | 2025-05-14 14:36:27.551794 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-14 14:36:27.551801 | orchestrator | Wednesday 14 May 2025 14:35:11 +0000 (0:00:00.752) 0:01:01.516 ********* 2025-05-14 14:36:27.551807 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.551814 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.551821 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.551827 | orchestrator | 2025-05-14 14:36:27.551834 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-14 14:36:27.551841 | orchestrator | Wednesday 14 May 2025 14:35:12 +0000 (0:00:00.965) 0:01:02.482 ********* 2025-05-14 14:36:27.551847 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.551854 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.551864 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.551876 | orchestrator | 2025-05-14 14:36:27.551888 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-14 14:36:27.551899 | orchestrator | Wednesday 14 May 2025 14:35:12 +0000 (0:00:00.356) 0:01:02.839 ********* 2025-05-14 14:36:27.551910 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.551921 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.551932 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.551944 | orchestrator | 2025-05-14 14:36:27.551960 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-14 14:36:27.551972 | orchestrator | Wednesday 14 May 2025 14:35:13 +0000 (0:00:00.797) 0:01:03.636 ********* 2025-05-14 14:36:27.551984 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.551993 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.552000 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.552006 | orchestrator | 2025-05-14 14:36:27.552013 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-14 14:36:27.552025 | orchestrator | Wednesday 14 May 2025 14:35:13 +0000 (0:00:00.562) 0:01:04.199 ********* 2025-05-14 14:36:27.552032 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.552039 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.552045 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.552052 | orchestrator | 2025-05-14 14:36:27.552058 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-14 14:36:27.552065 | orchestrator | Wednesday 14 May 2025 14:35:14 +0000 (0:00:00.222) 0:01:04.421 ********* 2025-05-14 14:36:27.552072 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552079 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552085 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552092 | orchestrator | 2025-05-14 14:36:27.552098 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-14 14:36:27.552105 | orchestrator | Wednesday 14 May 2025 14:35:14 +0000 (0:00:00.297) 0:01:04.718 ********* 2025-05-14 14:36:27.552112 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552118 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552125 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552132 
| orchestrator | 2025-05-14 14:36:27.552138 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-14 14:36:27.552149 | orchestrator | Wednesday 14 May 2025 14:35:14 +0000 (0:00:00.330) 0:01:05.049 ********* 2025-05-14 14:36:27.552160 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552187 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552200 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552212 | orchestrator | 2025-05-14 14:36:27.552223 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-14 14:36:27.552235 | orchestrator | Wednesday 14 May 2025 14:35:15 +0000 (0:00:00.301) 0:01:05.351 ********* 2025-05-14 14:36:27.552245 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552252 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552258 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552265 | orchestrator | 2025-05-14 14:36:27.552271 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-14 14:36:27.552278 | orchestrator | Wednesday 14 May 2025 14:35:15 +0000 (0:00:00.240) 0:01:05.592 ********* 2025-05-14 14:36:27.552285 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552291 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552298 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552304 | orchestrator | 2025-05-14 14:36:27.552311 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-14 14:36:27.552317 | orchestrator | Wednesday 14 May 2025 14:35:15 +0000 (0:00:00.328) 0:01:05.920 ********* 2025-05-14 14:36:27.552324 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552331 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552337 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552344 | orchestrator | 2025-05-14 14:36:27.552350 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-14 14:36:27.552357 | orchestrator | Wednesday 14 May 2025 14:35:16 +0000 (0:00:00.330) 0:01:06.251 ********* 2025-05-14 14:36:27.552363 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552370 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552376 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552383 | orchestrator | 2025-05-14 14:36:27.552389 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-14 14:36:27.552396 | orchestrator | Wednesday 14 May 2025 14:35:16 +0000 (0:00:00.326) 0:01:06.578 ********* 2025-05-14 14:36:27.552403 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552409 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552416 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552422 | orchestrator | 2025-05-14 14:36:27.552429 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-14 14:36:27.552441 | orchestrator | Wednesday 14 May 2025 14:35:16 +0000 (0:00:00.248) 0:01:06.826 ********* 2025-05-14 14:36:27.552448 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552454 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552461 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552467 | orchestrator | 2025-05-14 14:36:27.552474 | orchestrator | 
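The lookup_cluster.yml tasks above decide whether this is a fresh bootstrap or an already-running cluster by checking for existing ovn_nb_db/ovn_sb_db container volumes and probing the DB service ports. A rough manual equivalent on one of the control nodes; the netcat invocation and the 6641 NB client port are assumptions (only 6642 appears in this log):

    # Any pre-existing OVN DB data on this node? Volume names match the container definitions above.
    docker volume ls --quiet --filter name=ovn_nb_db
    docker volume ls --quiet --filter name=ovn_sb_db

    # Is an ovsdb-server already answering on the NB/SB client ports?
    nc -z -w 2 192.168.16.10 6641 && echo "OVN NB port is open"
    nc -z -w 2 192.168.16.10 6642 && echo "OVN SB port is open"
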
TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-14 14:36:27.552481 | orchestrator | Wednesday 14 May 2025 14:35:16 +0000 (0:00:00.292) 0:01:07.118 ********* 2025-05-14 14:36:27.552487 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552494 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552501 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552507 | orchestrator | 2025-05-14 14:36:27.552519 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-14 14:36:27.552526 | orchestrator | Wednesday 14 May 2025 14:35:17 +0000 (0:00:00.313) 0:01:07.432 ********* 2025-05-14 14:36:27.552532 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552539 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552545 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552552 | orchestrator | 2025-05-14 14:36:27.552559 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-14 14:36:27.552565 | orchestrator | Wednesday 14 May 2025 14:35:17 +0000 (0:00:00.269) 0:01:07.702 ********* 2025-05-14 14:36:27.552572 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552578 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552585 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552591 | orchestrator | 2025-05-14 14:36:27.552598 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 14:36:27.552604 | orchestrator | Wednesday 14 May 2025 14:35:17 +0000 (0:00:00.361) 0:01:08.063 ********* 2025-05-14 14:36:27.552615 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:36:27.552622 | orchestrator | 2025-05-14 14:36:27.552629 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-14 14:36:27.552635 | orchestrator | Wednesday 14 May 2025 14:35:18 +0000 (0:00:00.709) 0:01:08.773 ********* 2025-05-14 14:36:27.552642 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.552648 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.552655 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.552662 | orchestrator | 2025-05-14 14:36:27.552668 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-14 14:36:27.552675 | orchestrator | Wednesday 14 May 2025 14:35:18 +0000 (0:00:00.408) 0:01:09.181 ********* 2025-05-14 14:36:27.552681 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.552688 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.552694 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.552701 | orchestrator | 2025-05-14 14:36:27.552708 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-14 14:36:27.552714 | orchestrator | Wednesday 14 May 2025 14:35:19 +0000 (0:00:00.511) 0:01:09.692 ********* 2025-05-14 14:36:27.552721 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552727 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552734 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552740 | orchestrator | 2025-05-14 14:36:27.552747 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-14 14:36:27.552753 | orchestrator | Wednesday 14 May 2025 14:35:19 
+0000 (0:00:00.399) 0:01:10.092 ********* 2025-05-14 14:36:27.552760 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552766 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552773 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552779 | orchestrator | 2025-05-14 14:36:27.552786 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-14 14:36:27.552793 | orchestrator | Wednesday 14 May 2025 14:35:20 +0000 (0:00:00.643) 0:01:10.735 ********* 2025-05-14 14:36:27.552804 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552811 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552817 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552824 | orchestrator | 2025-05-14 14:36:27.552830 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-14 14:36:27.552836 | orchestrator | Wednesday 14 May 2025 14:35:20 +0000 (0:00:00.477) 0:01:11.213 ********* 2025-05-14 14:36:27.552843 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552849 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552856 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552862 | orchestrator | 2025-05-14 14:36:27.552869 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-14 14:36:27.552875 | orchestrator | Wednesday 14 May 2025 14:35:21 +0000 (0:00:00.691) 0:01:11.904 ********* 2025-05-14 14:36:27.552882 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552889 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552895 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552902 | orchestrator | 2025-05-14 14:36:27.552908 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-14 14:36:27.552915 | orchestrator | Wednesday 14 May 2025 14:35:22 +0000 (0:00:00.576) 0:01:12.480 ********* 2025-05-14 14:36:27.552921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.552928 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.552934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.552941 | orchestrator | 2025-05-14 14:36:27.552947 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-14 14:36:27.552954 | orchestrator | Wednesday 14 May 2025 14:35:22 +0000 (0:00:00.410) 0:01:12.891 ********* 2025-05-14 14:36:27.552961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.552970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553292 | orchestrator | 2025-05-14 14:36:27.553299 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-14 14:36:27.553306 | orchestrator | Wednesday 14 May 2025 14:35:23 +0000 (0:00:01.309) 0:01:14.201 ********* 2025-05-14 14:36:27.553313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553386 | orchestrator | 2025-05-14 14:36:27.553392 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-14 14:36:27.553399 | orchestrator | Wednesday 14 May 2025 14:35:27 +0000 (0:00:03.816) 0:01:18.018 ********* 2025-05-14 14:36:27.553406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 
14:36:27.553524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.553538 | orchestrator | 2025-05-14 14:36:27.553545 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 14:36:27.553551 | orchestrator | Wednesday 14 May 2025 14:35:30 +0000 (0:00:02.393) 0:01:20.412 ********* 2025-05-14 14:36:27.553558 | orchestrator | 2025-05-14 14:36:27.553565 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 14:36:27.553571 | orchestrator | Wednesday 14 May 2025 14:35:30 +0000 (0:00:00.094) 0:01:20.506 ********* 2025-05-14 14:36:27.553578 | orchestrator | 2025-05-14 14:36:27.553585 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 14:36:27.553591 | orchestrator | Wednesday 14 May 2025 14:35:30 +0000 (0:00:00.056) 0:01:20.562 ********* 2025-05-14 14:36:27.553598 | orchestrator | 2025-05-14 14:36:27.553604 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-14 14:36:27.553611 | orchestrator | Wednesday 14 May 2025 14:35:30 +0000 (0:00:00.055) 0:01:20.617 ********* 2025-05-14 14:36:27.553617 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.553624 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.553630 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.553637 | orchestrator | 2025-05-14 14:36:27.553643 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-14 14:36:27.553650 | orchestrator | Wednesday 14 May 2025 14:35:38 +0000 (0:00:08.022) 0:01:28.640 ********* 2025-05-14 14:36:27.553656 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.553663 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.553669 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.553676 | orchestrator | 2025-05-14 14:36:27.553683 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-14 14:36:27.553689 | orchestrator | Wednesday 14 May 2025 14:35:40 +0000 (0:00:02.516) 0:01:31.157 ********* 2025-05-14 14:36:27.553695 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.553702 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.553709 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.553715 | orchestrator | 2025-05-14 14:36:27.553722 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-14 14:36:27.553732 | orchestrator | Wednesday 14 May 2025 14:35:43 +0000 (0:00:02.794) 0:01:33.952 ********* 2025-05-14 
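After the restart handlers, the "Wait for leader election", "Get OVN_Northbound/OVN_Southbound cluster leader" and "Configure OVN NB/SB connection settings" steps that follow inspect the Raft state of the new databases and create the client listeners on the leader. A sketch of how the same state can be inspected and set by hand; the control socket paths, the ovs-appctl wrapper and the exact set-connection arguments are assumptions about the kolla images, while the container names, the 6642 SB port and the addresses come from the log (6641 is the conventional NB port):

    # Raft status of the Northbound and Southbound DBs (role, leader, term, servers).
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

    # On the leader, the connection-settings tasks create the TCP listeners that
    # ovn-remote (see above) and the OpenStack services point at, roughly:
    docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:192.168.16.10
    docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:192.168.16.10
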
14:36:27.553739 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.553746 | orchestrator | 2025-05-14 14:36:27.553752 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-14 14:36:27.553759 | orchestrator | Wednesday 14 May 2025 14:35:43 +0000 (0:00:00.194) 0:01:34.147 ********* 2025-05-14 14:36:27.553765 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.553772 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.553778 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.553785 | orchestrator | 2025-05-14 14:36:27.553796 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-14 14:36:27.553803 | orchestrator | Wednesday 14 May 2025 14:35:44 +0000 (0:00:01.025) 0:01:35.172 ********* 2025-05-14 14:36:27.553810 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.553817 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.553825 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.553832 | orchestrator | 2025-05-14 14:36:27.553839 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-14 14:36:27.553847 | orchestrator | Wednesday 14 May 2025 14:35:45 +0000 (0:00:00.630) 0:01:35.803 ********* 2025-05-14 14:36:27.553855 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.553862 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.553870 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.553877 | orchestrator | 2025-05-14 14:36:27.553884 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-14 14:36:27.553892 | orchestrator | Wednesday 14 May 2025 14:35:46 +0000 (0:00:01.071) 0:01:36.874 ********* 2025-05-14 14:36:27.553899 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.553906 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.553914 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.553921 | orchestrator | 2025-05-14 14:36:27.553931 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-14 14:36:27.553939 | orchestrator | Wednesday 14 May 2025 14:35:47 +0000 (0:00:00.608) 0:01:37.483 ********* 2025-05-14 14:36:27.553946 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.553953 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.553961 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.553968 | orchestrator | 2025-05-14 14:36:27.553975 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-14 14:36:27.553982 | orchestrator | Wednesday 14 May 2025 14:35:48 +0000 (0:00:01.125) 0:01:38.609 ********* 2025-05-14 14:36:27.553990 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.553997 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.554004 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.554050 | orchestrator | 2025-05-14 14:36:27.554066 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-14 14:36:27.554078 | orchestrator | Wednesday 14 May 2025 14:35:49 +0000 (0:00:00.787) 0:01:39.396 ********* 2025-05-14 14:36:27.554090 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.554101 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.554113 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.554124 | orchestrator | 2025-05-14 14:36:27.554136 | orchestrator | TASK 
[ovn-db : Ensuring config directories exist] ****************************** 2025-05-14 14:36:27.554148 | orchestrator | Wednesday 14 May 2025 14:35:49 +0000 (0:00:00.430) 0:01:39.827 ********* 2025-05-14 14:36:27.554161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554202 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554210 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554239 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554251 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554258 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554266 | orchestrator | 2025-05-14 14:36:27.554273 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-14 14:36:27.554281 | orchestrator | Wednesday 14 May 2025 14:35:51 +0000 (0:00:01.768) 0:01:41.595 ********* 2025-05-14 14:36:27.554288 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554309 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554316 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554343 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 
'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554369 | orchestrator | 2025-05-14 14:36:27.554376 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-14 14:36:27.554383 | orchestrator | Wednesday 14 May 2025 14:35:55 +0000 (0:00:04.035) 0:01:45.630 ********* 2025-05-14 14:36:27.554391 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554399 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554411 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554418 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554426 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554434 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554441 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554453 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554461 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:36:27.554468 | orchestrator | 2025-05-14 14:36:27.554475 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 14:36:27.554487 | orchestrator | Wednesday 14 May 2025 14:35:58 +0000 (0:00:03.068) 0:01:48.699 ********* 2025-05-14 14:36:27.554494 | orchestrator | 2025-05-14 14:36:27.554501 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 14:36:27.554509 | orchestrator | Wednesday 14 May 2025 14:35:58 +0000 (0:00:00.074) 0:01:48.774 ********* 2025-05-14 14:36:27.554516 | orchestrator | 2025-05-14 14:36:27.554523 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 14:36:27.554531 | orchestrator | Wednesday 14 May 2025 14:35:58 +0000 (0:00:00.359) 0:01:49.133 ********* 2025-05-14 14:36:27.554542 | orchestrator | 2025-05-14 14:36:27.554549 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-14 14:36:27.554556 | orchestrator | Wednesday 14 May 2025 14:35:58 +0000 (0:00:00.072) 0:01:49.206 ********* 2025-05-14 14:36:27.554564 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.554571 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.554578 | orchestrator | 2025-05-14 14:36:27.554585 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-14 14:36:27.554593 | orchestrator | Wednesday 14 May 2025 14:36:05 +0000 (0:00:06.330) 0:01:55.536 
********* 2025-05-14 14:36:27.554600 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.554607 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.554615 | orchestrator | 2025-05-14 14:36:27.554622 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-14 14:36:27.554629 | orchestrator | Wednesday 14 May 2025 14:36:11 +0000 (0:00:06.549) 0:02:02.086 ********* 2025-05-14 14:36:27.554636 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:36:27.554644 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:36:27.554651 | orchestrator | 2025-05-14 14:36:27.554658 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-14 14:36:27.554665 | orchestrator | Wednesday 14 May 2025 14:36:18 +0000 (0:00:06.317) 0:02:08.404 ********* 2025-05-14 14:36:27.554672 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:36:27.554679 | orchestrator | 2025-05-14 14:36:27.554687 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-14 14:36:27.554694 | orchestrator | Wednesday 14 May 2025 14:36:18 +0000 (0:00:00.425) 0:02:08.829 ********* 2025-05-14 14:36:27.554701 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.554708 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.554716 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.554723 | orchestrator | 2025-05-14 14:36:27.554730 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-14 14:36:27.554738 | orchestrator | Wednesday 14 May 2025 14:36:19 +0000 (0:00:00.805) 0:02:09.635 ********* 2025-05-14 14:36:27.554745 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.554752 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.554759 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.554766 | orchestrator | 2025-05-14 14:36:27.554774 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-14 14:36:27.554781 | orchestrator | Wednesday 14 May 2025 14:36:20 +0000 (0:00:00.625) 0:02:10.260 ********* 2025-05-14 14:36:27.554788 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.554795 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.554802 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.554810 | orchestrator | 2025-05-14 14:36:27.554817 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-14 14:36:27.554824 | orchestrator | Wednesday 14 May 2025 14:36:21 +0000 (0:00:01.305) 0:02:11.566 ********* 2025-05-14 14:36:27.554831 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:36:27.554838 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:36:27.554846 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:36:27.554853 | orchestrator | 2025-05-14 14:36:27.554860 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-14 14:36:27.554867 | orchestrator | Wednesday 14 May 2025 14:36:22 +0000 (0:00:00.970) 0:02:12.536 ********* 2025-05-14 14:36:27.554875 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.554882 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.554889 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.554896 | orchestrator | 2025-05-14 14:36:27.554904 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-14 
14:36:27.554911 | orchestrator | Wednesday 14 May 2025 14:36:23 +0000 (0:00:01.037) 0:02:13.574 ********* 2025-05-14 14:36:27.554918 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:36:27.554925 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:36:27.554937 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:36:27.554944 | orchestrator | 2025-05-14 14:36:27.554951 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:36:27.554959 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-14 14:36:27.554966 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-14 14:36:27.554977 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-14 14:36:27.554985 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:36:27.554993 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:36:27.555000 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:36:27.555007 | orchestrator | 2025-05-14 14:36:27.555014 | orchestrator | 2025-05-14 14:36:27.555021 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:36:27.555029 | orchestrator | Wednesday 14 May 2025 14:36:24 +0000 (0:00:01.548) 0:02:15.123 ********* 2025-05-14 14:36:27.555039 | orchestrator | =============================================================================== 2025-05-14 14:36:27.555047 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.07s 2025-05-14 14:36:27.555054 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.69s 2025-05-14 14:36:27.555062 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.35s 2025-05-14 14:36:27.555069 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.11s 2025-05-14 14:36:27.555076 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.07s 2025-05-14 14:36:27.555084 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.04s 2025-05-14 14:36:27.555091 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.82s 2025-05-14 14:36:27.555098 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.07s 2025-05-14 14:36:27.555105 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.99s 2025-05-14 14:36:27.555112 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.91s 2025-05-14 14:36:27.555120 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.39s 2025-05-14 14:36:27.555127 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.17s 2025-05-14 14:36:27.555134 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.98s 2025-05-14 14:36:27.555141 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.85s 2025-05-14 14:36:27.555148 | orchestrator | ovn-db : Ensuring config directories exist 
2025-05-14 14:36:27.555204 | orchestrator | 2025-05-14 14:36:27 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:36:27.555211 | orchestrator | 2025-05-14 14:36:27 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:36:30.596134 | orchestrator | 2025-05-14 14:36:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:36:30.596423 | orchestrator | 2025-05-14 14:36:30 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:36:30.596840 | orchestrator | 2025-05-14 14:36:30 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED
2025-05-14 14:36:30.596862 | orchestrator | 2025-05-14 14:36:30 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles from 14:36:33 through 14:39:27 elided: tasks d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f, 9031a252-8a0f-4142-aff6-98621039fa6a and 121917d2-6844-4b08-81d5-da99b976bbe1 were reported in state STARTED roughly every three seconds, each cycle followed by "Wait 1 second(s) until the next check" ...]
2025-05-14 14:39:30.757632 | orchestrator | 2025-05-14 14:39:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:39:30.757742 | orchestrator | 2025-05-14 14:39:30 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED
2025-05-14 14:39:30.757756 | orchestrator | 2025-05-14 14:39:30 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1
is in state STARTED 2025-05-14 14:39:30.757768 | orchestrator | 2025-05-14 14:39:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:33.811184 | orchestrator | 2025-05-14 14:39:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:33.811353 | orchestrator | 2025-05-14 14:39:33 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:33.812442 | orchestrator | 2025-05-14 14:39:33 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:33.812458 | orchestrator | 2025-05-14 14:39:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:36.854125 | orchestrator | 2025-05-14 14:39:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:36.857612 | orchestrator | 2025-05-14 14:39:36 | INFO  | Task 9a87c3ae-afba-40d0-a678-9c17d9c79546 is in state STARTED 2025-05-14 14:39:36.859666 | orchestrator | 2025-05-14 14:39:36 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:36.861706 | orchestrator | 2025-05-14 14:39:36 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:36.861776 | orchestrator | 2025-05-14 14:39:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:39.919282 | orchestrator | 2025-05-14 14:39:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:39.919459 | orchestrator | 2025-05-14 14:39:39 | INFO  | Task 9a87c3ae-afba-40d0-a678-9c17d9c79546 is in state STARTED 2025-05-14 14:39:39.921869 | orchestrator | 2025-05-14 14:39:39 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:39.922179 | orchestrator | 2025-05-14 14:39:39 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:39.922208 | orchestrator | 2025-05-14 14:39:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:42.970836 | orchestrator | 2025-05-14 14:39:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:42.971544 | orchestrator | 2025-05-14 14:39:42 | INFO  | Task 9a87c3ae-afba-40d0-a678-9c17d9c79546 is in state STARTED 2025-05-14 14:39:42.972104 | orchestrator | 2025-05-14 14:39:42 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:42.972757 | orchestrator | 2025-05-14 14:39:42 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:42.972854 | orchestrator | 2025-05-14 14:39:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:46.015423 | orchestrator | 2025-05-14 14:39:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:46.017030 | orchestrator | 2025-05-14 14:39:46 | INFO  | Task 9a87c3ae-afba-40d0-a678-9c17d9c79546 is in state STARTED 2025-05-14 14:39:46.018352 | orchestrator | 2025-05-14 14:39:46 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:46.018918 | orchestrator | 2025-05-14 14:39:46 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:46.019030 | orchestrator | 2025-05-14 14:39:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:49.060435 | orchestrator | 2025-05-14 14:39:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:49.061767 | orchestrator | 2025-05-14 14:39:49 | INFO  | Task 9a87c3ae-afba-40d0-a678-9c17d9c79546 is in state SUCCESS 
2025-05-14 14:39:49.065156 | orchestrator | 2025-05-14 14:39:49 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:49.066917 | orchestrator | 2025-05-14 14:39:49 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:49.066949 | orchestrator | 2025-05-14 14:39:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:52.112942 | orchestrator | 2025-05-14 14:39:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:52.115349 | orchestrator | 2025-05-14 14:39:52 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:52.115490 | orchestrator | 2025-05-14 14:39:52 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:52.115508 | orchestrator | 2025-05-14 14:39:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:55.161385 | orchestrator | 2025-05-14 14:39:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:55.161498 | orchestrator | 2025-05-14 14:39:55 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:55.161989 | orchestrator | 2025-05-14 14:39:55 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:55.162224 | orchestrator | 2025-05-14 14:39:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:39:58.211933 | orchestrator | 2025-05-14 14:39:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:39:58.212014 | orchestrator | 2025-05-14 14:39:58 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:39:58.212196 | orchestrator | 2025-05-14 14:39:58 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:39:58.212256 | orchestrator | 2025-05-14 14:39:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:40:01.266186 | orchestrator | 2025-05-14 14:40:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:40:01.268358 | orchestrator | 2025-05-14 14:40:01 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:40:01.270165 | orchestrator | 2025-05-14 14:40:01 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:40:01.270417 | orchestrator | 2025-05-14 14:40:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:40:04.321862 | orchestrator | 2025-05-14 14:40:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:40:04.322298 | orchestrator | 2025-05-14 14:40:04 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:40:04.325315 | orchestrator | 2025-05-14 14:40:04 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:40:04.325486 | orchestrator | 2025-05-14 14:40:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:40:07.367959 | orchestrator | 2025-05-14 14:40:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:40:07.369612 | orchestrator | 2025-05-14 14:40:07 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state STARTED 2025-05-14 14:40:07.372556 | orchestrator | 2025-05-14 14:40:07 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:40:07.372615 | orchestrator | 2025-05-14 14:40:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:40:10.427236 | orchestrator | 
2025-05-14 14:40:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:40:10.429399 | orchestrator | 2025-05-14 14:40:10 | INFO  | Task a69dc821-ba55-4ce6-a9d7-ffb571992283 is in state STARTED 2025-05-14 14:40:10.436439 | orchestrator | 2025-05-14 14:40:10 | INFO  | Task 9031a252-8a0f-4142-aff6-98621039fa6a is in state SUCCESS 2025-05-14 14:40:10.438526 | orchestrator | 2025-05-14 14:40:10.438572 | orchestrator | None 2025-05-14 14:40:10.438584 | orchestrator | 2025-05-14 14:40:10.438595 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:40:10.438608 | orchestrator | 2025-05-14 14:40:10.438619 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:40:10.438631 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.489) 0:00:00.490 ********* 2025-05-14 14:40:10.438643 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.438654 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.438685 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.438696 | orchestrator | 2025-05-14 14:40:10.438708 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:40:10.438719 | orchestrator | Wednesday 14 May 2025 14:32:56 +0000 (0:00:00.456) 0:00:00.946 ********* 2025-05-14 14:40:10.438731 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-14 14:40:10.438743 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-14 14:40:10.438753 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-14 14:40:10.438765 | orchestrator | 2025-05-14 14:40:10.438776 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-14 14:40:10.438787 | orchestrator | 2025-05-14 14:40:10.438798 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-14 14:40:10.438809 | orchestrator | Wednesday 14 May 2025 14:32:57 +0000 (0:00:00.318) 0:00:01.265 ********* 2025-05-14 14:40:10.438820 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.438831 | orchestrator | 2025-05-14 14:40:10.438842 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-14 14:40:10.438853 | orchestrator | Wednesday 14 May 2025 14:32:57 +0000 (0:00:00.631) 0:00:01.896 ********* 2025-05-14 14:40:10.438865 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.438877 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.438888 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.438899 | orchestrator | 2025-05-14 14:40:10.438918 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-14 14:40:10.438929 | orchestrator | Wednesday 14 May 2025 14:32:59 +0000 (0:00:02.110) 0:00:04.007 ********* 2025-05-14 14:40:10.438941 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.438952 | orchestrator | 2025-05-14 14:40:10.438963 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-14 14:40:10.439002 | orchestrator | Wednesday 14 May 2025 14:33:00 +0000 (0:00:00.565) 0:00:04.572 ********* 2025-05-14 14:40:10.439013 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.439024 
| orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.439035 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.439046 | orchestrator | 2025-05-14 14:40:10.439057 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-14 14:40:10.439068 | orchestrator | Wednesday 14 May 2025 14:33:02 +0000 (0:00:01.550) 0:00:06.123 ********* 2025-05-14 14:40:10.439079 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 14:40:10.439153 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 14:40:10.439167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 14:40:10.439180 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 14:40:10.439191 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 14:40:10.439203 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 14:40:10.439215 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 14:40:10.439229 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 14:40:10.439241 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 14:40:10.439253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 14:40:10.439265 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 14:40:10.439276 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 14:40:10.439288 | orchestrator | 2025-05-14 14:40:10.439300 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 14:40:10.439313 | orchestrator | Wednesday 14 May 2025 14:33:04 +0000 (0:00:02.357) 0:00:08.480 ********* 2025-05-14 14:40:10.439326 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-14 14:40:10.439348 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-14 14:40:10.439361 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-14 14:40:10.439380 | orchestrator | 2025-05-14 14:40:10.439393 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 14:40:10.439405 | orchestrator | Wednesday 14 May 2025 14:33:05 +0000 (0:00:01.001) 0:00:09.482 ********* 2025-05-14 14:40:10.439418 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-14 14:40:10.439430 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-14 14:40:10.439441 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-14 14:40:10.439453 | orchestrator | 2025-05-14 14:40:10.439465 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 14:40:10.439477 | orchestrator | Wednesday 14 May 2025 14:33:07 +0000 (0:00:01.641) 0:00:11.124 ********* 2025-05-14 14:40:10.439496 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-14 14:40:10.439508 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.439535 | orchestrator | skipping: 
[testbed-node-1] => (item=ip_vs)  2025-05-14 14:40:10.439547 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.439558 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-14 14:40:10.439569 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.439579 | orchestrator | 2025-05-14 14:40:10.439590 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-14 14:40:10.439601 | orchestrator | Wednesday 14 May 2025 14:33:07 +0000 (0:00:00.849) 0:00:11.973 ********* 2025-05-14 14:40:10.439660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.439696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.439709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.439721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.439733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.439753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.439765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.439790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.439802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.439814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.439826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.439838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.439849 | orchestrator | 2025-05-14 14:40:10.439861 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-14 14:40:10.439872 | orchestrator | Wednesday 14 May 2025 14:33:10 +0000 (0:00:03.102) 0:00:15.076 ********* 2025-05-14 14:40:10.439883 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.439900 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.439911 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.439922 | orchestrator | 2025-05-14 14:40:10.439938 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-14 14:40:10.439957 | orchestrator | Wednesday 14 May 2025 14:33:12 +0000 (0:00:01.931) 0:00:17.008 ********* 2025-05-14 14:40:10.439969 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-14 14:40:10.439980 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-14 14:40:10.439991 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-14 14:40:10.440001 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-14 14:40:10.440011 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-14 14:40:10.440022 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-14 14:40:10.440032 | orchestrator | 2025-05-14 14:40:10.440043 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-14 14:40:10.440053 | orchestrator | Wednesday 14 May 2025 14:33:16 +0000 (0:00:03.174) 0:00:20.182 ********* 2025-05-14 14:40:10.440064 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.440075 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.440108 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.440120 | orchestrator | 2025-05-14 14:40:10.440131 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-14 14:40:10.440142 | orchestrator | Wednesday 14 May 2025 14:33:17 +0000 (0:00:01.607) 0:00:21.790 ********* 2025-05-14 14:40:10.440153 | orchestrator | ok: 
[testbed-node-1] 2025-05-14 14:40:10.440164 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.440174 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.440185 | orchestrator | 2025-05-14 14:40:10.440195 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-14 14:40:10.440206 | orchestrator | Wednesday 14 May 2025 14:33:19 +0000 (0:00:01.745) 0:00:23.536 ********* 2025-05-14 14:40:10.440232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.440245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.440257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.440268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.440295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.440308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.440326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.440339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.440351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.440362 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.440374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.440393 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.440405 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.440423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.440435 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.440454 | orchestrator | 2025-05-14 14:40:10.440465 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-14 14:40:10.440476 | orchestrator | Wednesday 14 May 2025 14:33:22 +0000 (0:00:03.275) 0:00:26.812 ********* 2025-05-14 14:40:10.440493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.440588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.440624 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.440636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.440660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.440681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.440693 | orchestrator | 2025-05-14 14:40:10.440704 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-14 14:40:10.440715 | orchestrator | Wednesday 14 May 2025 14:33:26 +0000 (0:00:03.802) 0:00:30.614 ********* 2025-05-14 14:40:10.440727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.440788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.441716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.441742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.441759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.441772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.441784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.441808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.441819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.441831 | orchestrator | 2025-05-14 14:40:10.441842 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-14 14:40:10.441853 | orchestrator | Wednesday 14 May 2025 14:33:29 +0000 (0:00:02.688) 0:00:33.303 ********* 2025-05-14 14:40:10.441872 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 14:40:10.441885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 14:40:10.441896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 14:40:10.441906 | orchestrator | 2025-05-14 14:40:10.441917 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-14 14:40:10.441928 | orchestrator | Wednesday 14 May 2025 14:33:31 +0000 (0:00:02.461) 0:00:35.765 ********* 2025-05-14 14:40:10.441939 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 14:40:10.441959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 14:40:10.441971 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 14:40:10.441982 | orchestrator | 2025-05-14 14:40:10.441993 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-14 14:40:10.442004 | orchestrator | Wednesday 14 May 2025 14:33:36 +0000 (0:00:04.467) 0:00:40.232 ********* 2025-05-14 14:40:10.442015 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.442109 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.442120 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.442131 | orchestrator | 2025-05-14 14:40:10.442141 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-14 14:40:10.442166 | orchestrator | Wednesday 14 May 2025 14:33:37 +0000 (0:00:01.230) 0:00:41.463 ********* 2025-05-14 14:40:10.442178 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 14:40:10.442211 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 14:40:10.442222 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 14:40:10.442240 | orchestrator | 2025-05-14 14:40:10.442252 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-14 14:40:10.442262 | orchestrator | Wednesday 14 May 2025 14:33:39 +0000 (0:00:02.601) 0:00:44.064 ********* 2025-05-14 14:40:10.442273 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 14:40:10.442284 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 14:40:10.442295 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 14:40:10.442306 | orchestrator | 2025-05-14 14:40:10.442317 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-14 14:40:10.442327 | orchestrator | Wednesday 14 May 2025 14:33:41 +0000 (0:00:01.967) 0:00:46.032 ********* 2025-05-14 14:40:10.442338 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-14 14:40:10.442349 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-14 14:40:10.442360 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-14 14:40:10.442370 | orchestrator | 2025-05-14 14:40:10.442381 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-14 14:40:10.442391 | orchestrator | Wednesday 14 May 2025 14:33:44 +0000 (0:00:02.281) 0:00:48.313 ********* 2025-05-14 14:40:10.442402 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-14 14:40:10.442412 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-14 14:40:10.442423 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-14 14:40:10.442434 | orchestrator | 2025-05-14 14:40:10.442444 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-14 14:40:10.442455 | orchestrator | Wednesday 14 May 2025 14:33:45 +0000 (0:00:01.784) 0:00:50.098 ********* 2025-05-14 14:40:10.442474 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.442485 | orchestrator | 2025-05-14 14:40:10.442495 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-14 14:40:10.442506 | orchestrator | Wednesday 14 May 2025 14:33:46 +0000 (0:00:00.600) 0:00:50.698 ********* 2025-05-14 14:40:10.442517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.442538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.442577 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.442595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.442607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.442618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.442630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.442649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.442662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.442683 | orchestrator | 2025-05-14 14:40:10.442694 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-14 14:40:10.442705 | orchestrator | Wednesday 14 May 2025 14:33:49 +0000 (0:00:03.019) 0:00:53.717 ********* 2025-05-14 14:40:10.442722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.442734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.442745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.442757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.442768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.442780 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.442811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.442831 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.442843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.442860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.442872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.442883 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 14:40:10.442894 | orchestrator | 2025-05-14 14:40:10.442905 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-14 14:40:10.442916 | orchestrator | Wednesday 14 May 2025 14:33:51 +0000 (0:00:01.557) 0:00:55.275 ********* 2025-05-14 14:40:10.442927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.442939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.442956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.442983 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.443000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.443018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.443030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.443041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 14:40:10.443059 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.443070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 14:40:10.443082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 14:40:10.443132 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.443144 | orchestrator | 2025-05-14 14:40:10.443155 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-14 14:40:10.443180 | orchestrator | Wednesday 14 May 2025 14:33:52 +0000 (0:00:01.181) 0:00:56.457 ********* 2025-05-14 14:40:10.443192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 14:40:10.443203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 14:40:10.443213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 14:40:10.443224 | orchestrator | 2025-05-14 14:40:10.443234 | orchestrator 
| TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-14 14:40:10.443245 | orchestrator | Wednesday 14 May 2025 14:33:54 +0000 (0:00:01.973) 0:00:58.430 ********* 2025-05-14 14:40:10.443256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 14:40:10.443266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 14:40:10.443277 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 14:40:10.443287 | orchestrator | 2025-05-14 14:40:10.443298 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-14 14:40:10.443309 | orchestrator | Wednesday 14 May 2025 14:33:56 +0000 (0:00:01.961) 0:01:00.391 ********* 2025-05-14 14:40:10.443320 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 14:40:10.443331 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 14:40:10.443348 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 14:40:10.443358 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 14:40:10.443369 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.443380 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 14:40:10.443390 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.443401 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 14:40:10.443412 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.443422 | orchestrator | 2025-05-14 14:40:10.443433 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-14 14:40:10.443444 | orchestrator | Wednesday 14 May 2025 14:33:59 +0000 (0:00:02.817) 0:01:03.209 ********* 2025-05-14 14:40:10.443455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.443467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.443495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 14:40:10.443513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.443525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.443542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 14:40:10.443554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.443565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.443585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.443602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.443614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 14:40:10.443631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf', '__omit_place_holder__f9b48d3b6533dc59f9a31486f5f663e7459aaebf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 14:40:10.443643 | orchestrator | 2025-05-14 14:40:10.443654 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-14 14:40:10.443665 | orchestrator | Wednesday 14 May 2025 14:34:02 +0000 (0:00:03.396) 0:01:06.605 ********* 2025-05-14 14:40:10.443675 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 
14:40:10.443686 | orchestrator | 2025-05-14 14:40:10.443697 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-14 14:40:10.443708 | orchestrator | Wednesday 14 May 2025 14:34:03 +0000 (0:00:00.642) 0:01:07.247 ********* 2025-05-14 14:40:10.443720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 14:40:10.443750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.443763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.443783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.443796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 14:40:10.443816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.443828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.443850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.443862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 14:40:10.443907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.443920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.443937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.443949 | orchestrator | 2025-05-14 14:40:10.443960 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-14 14:40:10.443971 | orchestrator | Wednesday 14 May 2025 14:34:06 +0000 (0:00:03.691) 0:01:10.939 ********* 2025-05-14 14:40:10.443983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 14:40:10.444002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.444014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444044 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.444056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 14:40:10.444073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.444143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444168 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.444180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 14:40:10.444200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.444212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444246 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.444255 | orchestrator | 2025-05-14 14:40:10.444265 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-14 14:40:10.444275 | orchestrator | Wednesday 14 May 2025 14:34:07 +0000 (0:00:00.949) 0:01:11.888 ********* 2025-05-14 
14:40:10.444285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 14:40:10.444298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 14:40:10.444308 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.444318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 14:40:10.444328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 14:40:10.444337 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.444347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 14:40:10.444357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 14:40:10.444367 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.444376 | orchestrator | 2025-05-14 14:40:10.444386 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-14 14:40:10.444402 | orchestrator | Wednesday 14 May 2025 14:34:09 +0000 (0:00:01.307) 0:01:13.196 ********* 2025-05-14 14:40:10.444413 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.444423 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.444432 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.444442 | orchestrator | 2025-05-14 14:40:10.444451 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-14 14:40:10.444461 | orchestrator | Wednesday 14 May 2025 14:34:10 +0000 (0:00:01.367) 0:01:14.564 ********* 2025-05-14 14:40:10.444471 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.444480 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.444490 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.444499 | orchestrator | 2025-05-14 14:40:10.444509 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-14 14:40:10.444519 | orchestrator | Wednesday 14 May 2025 14:34:12 +0000 (0:00:02.143) 0:01:16.708 ********* 2025-05-14 14:40:10.444528 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.444538 | orchestrator | 2025-05-14 14:40:10.444547 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-14 14:40:10.444557 | orchestrator | Wednesday 14 May 2025 14:34:13 +0000 (0:00:00.936) 0:01:17.644 ********* 2025-05-14 14:40:10.444575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.444605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.444639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.444697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444718 | orchestrator | 2025-05-14 14:40:10.444729 | orchestrator | TASK 
[haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-14 14:40:10.444738 | orchestrator | Wednesday 14 May 2025 14:34:19 +0000 (0:00:05.663) 0:01:23.307 ********* 2025-05-14 14:40:10.444749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.444767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444795 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.444810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.444821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.444842 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.446157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.446204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.446221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.446239 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.446249 | orchestrator | 2025-05-14 14:40:10.446259 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-14 14:40:10.446269 | orchestrator | Wednesday 14 May 2025 14:34:20 +0000 (0:00:01.546) 0:01:24.854 ********* 2025-05-14 14:40:10.446280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 14:40:10.446291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 14:40:10.446303 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.446319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 14:40:10.446329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 14:40:10.446340 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.446350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 14:40:10.446361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 14:40:10.446371 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.446382 | orchestrator | 2025-05-14 14:40:10.446392 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-14 14:40:10.446412 | orchestrator | Wednesday 14 May 2025 14:34:21 +0000 (0:00:00.984) 0:01:25.839 ********* 2025-05-14 14:40:10.446429 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.446439 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.446449 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.446459 | orchestrator | 2025-05-14 14:40:10.446469 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-14 14:40:10.446479 | orchestrator | Wednesday 14 May 2025 14:34:23 +0000 (0:00:01.388) 0:01:27.228 ********* 2025-05-14 14:40:10.446489 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.446499 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.446509 
| orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.446519 | orchestrator | 2025-05-14 14:40:10.446529 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-14 14:40:10.446539 | orchestrator | Wednesday 14 May 2025 14:34:25 +0000 (0:00:02.169) 0:01:29.397 ********* 2025-05-14 14:40:10.446549 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.446559 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.446570 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.446580 | orchestrator | 2025-05-14 14:40:10.446598 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-14 14:40:10.446608 | orchestrator | Wednesday 14 May 2025 14:34:25 +0000 (0:00:00.303) 0:01:29.701 ********* 2025-05-14 14:40:10.446618 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.446646 | orchestrator | 2025-05-14 14:40:10.446656 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-14 14:40:10.446666 | orchestrator | Wednesday 14 May 2025 14:34:27 +0000 (0:00:01.873) 0:01:31.574 ********* 2025-05-14 14:40:10.446678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 14:40:10.446695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 14:40:10.446713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 14:40:10.446730 | orchestrator | 2025-05-14 14:40:10.446741 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-14 14:40:10.446753 | orchestrator | Wednesday 14 May 2025 14:34:30 +0000 (0:00:03.166) 0:01:34.741 ********* 2025-05-14 14:40:10.446765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 14:40:10.446776 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.446794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 14:40:10.446806 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.446825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 14:40:10.446837 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.446848 | orchestrator | 2025-05-14 14:40:10.446867 | orchestrator | TASK [haproxy-config : Configuring firewall for 
ceph-rgw] ********************** 2025-05-14 14:40:10.446878 | orchestrator | Wednesday 14 May 2025 14:34:32 +0000 (0:00:01.803) 0:01:36.545 ********* 2025-05-14 14:40:10.446890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 14:40:10.446903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 14:40:10.446921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.446933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 14:40:10.446945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 14:40:10.446958 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.446969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 14:40:10.446986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 14:40:10.446997 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447006 | orchestrator | 2025-05-14 14:40:10.447016 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-14 14:40:10.447027 | orchestrator | Wednesday 14 May 2025 14:34:34 +0000 (0:00:02.137) 0:01:38.682 ********* 2025-05-14 14:40:10.447037 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.447047 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.447057 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447067 | orchestrator | 2025-05-14 14:40:10.447077 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-14 14:40:10.447141 | orchestrator | Wednesday 14 May 2025 14:34:35 +0000 (0:00:00.877) 0:01:39.559 ********* 2025-05-14 14:40:10.447152 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.447162 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.447171 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447180 | orchestrator | 2025-05-14 14:40:10.447197 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-14 14:40:10.447207 | orchestrator | Wednesday 14 May 2025 14:34:36 +0000 (0:00:01.272) 0:01:40.832 ********* 2025-05-14 14:40:10.447217 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.447227 | orchestrator | 2025-05-14 14:40:10.447236 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-14 14:40:10.447245 | orchestrator | Wednesday 14 May 2025 14:34:37 +0000 (0:00:00.940) 0:01:41.773 ********* 2025-05-14 14:40:10.447261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.447281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.447348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.447432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447480 | orchestrator | 2025-05-14 14:40:10.447490 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-14 14:40:10.447500 | orchestrator | Wednesday 14 May 2025 14:34:41 +0000 (0:00:03.860) 0:01:45.633 ********* 2025-05-14 14:40:10.447510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.447520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447569 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.447580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.447590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447635 | orchestrator | 
skipping: [testbed-node-1] 2025-05-14 14:40:10.447650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.447665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.447690 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447698 | orchestrator | 2025-05-14 14:40:10.447706 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-14 14:40:10.447714 | orchestrator | Wednesday 14 May 
2025 14:34:42 +0000 (0:00:00.833) 0:01:46.467 ********* 2025-05-14 14:40:10.447722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 14:40:10.447735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 14:40:10.447743 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.447752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 14:40:10.447765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 14:40:10.447773 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.447781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 14:40:10.447789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 14:40:10.447797 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447805 | orchestrator | 2025-05-14 14:40:10.447817 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-14 14:40:10.447825 | orchestrator | Wednesday 14 May 2025 14:34:43 +0000 (0:00:00.817) 0:01:47.284 ********* 2025-05-14 14:40:10.447833 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.447841 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.447848 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.447856 | orchestrator | 2025-05-14 14:40:10.447864 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-14 14:40:10.447872 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:01.313) 0:01:48.597 ********* 2025-05-14 14:40:10.447880 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.447888 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.447896 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.447904 | orchestrator | 2025-05-14 14:40:10.447911 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-14 14:40:10.447919 | orchestrator | Wednesday 14 May 2025 14:34:46 +0000 (0:00:02.216) 0:01:50.814 ********* 2025-05-14 14:40:10.447927 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.447934 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.447942 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447950 | orchestrator | 2025-05-14 14:40:10.447958 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-14 14:40:10.447966 | orchestrator | Wednesday 14 May 2025 14:34:47 +0000 (0:00:00.297) 0:01:51.112 
********* 2025-05-14 14:40:10.447973 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.447981 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.447989 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.447997 | orchestrator | 2025-05-14 14:40:10.448004 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-14 14:40:10.448012 | orchestrator | Wednesday 14 May 2025 14:34:48 +0000 (0:00:01.036) 0:01:52.149 ********* 2025-05-14 14:40:10.448020 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.448028 | orchestrator | 2025-05-14 14:40:10.448035 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-14 14:40:10.448043 | orchestrator | Wednesday 14 May 2025 14:34:49 +0000 (0:00:01.249) 0:01:53.399 ********* 2025-05-14 14:40:10.448058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:40:10.448077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:40:10.448100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:40:10.448171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:40:10.448184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:40:10.448192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:40:10.448223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448305 | orchestrator | 2025-05-14 14:40:10.448317 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-14 14:40:10.448325 | orchestrator | Wednesday 14 May 2025 14:34:54 +0000 (0:00:04.739) 0:01:58.138 ********* 2025-05-14 14:40:10.448333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:40:10.448342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:40:10.448351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448424 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.448435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:40:10.448452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:40:10.448460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-05-14 14:40:10.448509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:40:10.448545 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.448554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.448604 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.448612 | orchestrator | 2025-05-14 14:40:10.448620 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-14 14:40:10.448628 | orchestrator | Wednesday 14 May 2025 14:34:54 +0000 (0:00:00.817) 0:01:58.956 ********* 2025-05-14 14:40:10.448636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-14 14:40:10.448650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-14 14:40:10.448659 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.448666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-14 14:40:10.448674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-14 14:40:10.448682 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.448690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-14 14:40:10.448698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-14 14:40:10.448705 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.448713 | orchestrator | 2025-05-14 14:40:10.448721 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-14 14:40:10.448728 | orchestrator | Wednesday 14 May 2025 14:34:55 +0000 (0:00:01.107) 0:02:00.063 ********* 2025-05-14 14:40:10.448736 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.448744 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.448752 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.448759 | orchestrator | 2025-05-14 14:40:10.448767 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-14 14:40:10.448775 | orchestrator | Wednesday 14 May 2025 14:34:57 +0000 (0:00:01.090) 0:02:01.154 ********* 2025-05-14 14:40:10.448783 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.448790 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.448798 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.448806 | orchestrator | 2025-05-14 14:40:10.448814 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-14 14:40:10.448821 | orchestrator | Wednesday 14 May 2025 14:34:59 +0000 (0:00:01.962) 0:02:03.116 ********* 2025-05-14 14:40:10.448829 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.448836 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.448844 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.448852 | orchestrator | 2025-05-14 14:40:10.448860 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-14 14:40:10.450506 | orchestrator | Wednesday 14 May 2025 14:34:59 +0000 (0:00:00.487) 0:02:03.604 ********* 2025-05-14 14:40:10.450536 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.450543 | orchestrator | 2025-05-14 14:40:10.450551 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-14 14:40:10.450558 | orchestrator | Wednesday 14 May 2025 14:35:00 +0000 (0:00:01.199) 0:02:04.803 ********* 2025-05-14 14:40:10.450572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:40:10.450589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.450607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:40:10.450622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.450635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:40:10.450651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.450659 | orchestrator | 2025-05-14 14:40:10.450666 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-14 14:40:10.450673 | orchestrator | Wednesday 14 May 2025 14:35:05 +0000 (0:00:05.120) 0:02:09.924 ********* 2025-05-14 14:40:10.450686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:40:10.450700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.450708 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.450721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:40:10.450732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:40:10.450744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.450760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}}}})  2025-05-14 14:40:10.450772 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.450779 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.450786 | orchestrator | 2025-05-14 14:40:10.450792 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-14 14:40:10.450799 | orchestrator | Wednesday 14 May 2025 14:35:10 +0000 (0:00:04.344) 0:02:14.268 ********* 2025-05-14 14:40:10.450807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 14:40:10.450814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 14:40:10.450822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 14:40:10.450829 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.450840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 14:40:10.450848 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.450855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 14:40:10.450865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 14:40:10.450872 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.450879 | orchestrator | 2025-05-14 14:40:10.450885 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-14 14:40:10.450895 | orchestrator | Wednesday 14 May 2025 14:35:14 +0000 (0:00:04.586) 0:02:18.855 ********* 2025-05-14 14:40:10.450902 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.450909 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.450915 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.450922 | orchestrator | 2025-05-14 14:40:10.450928 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-14 14:40:10.450935 | orchestrator | Wednesday 14 May 2025 14:35:15 +0000 (0:00:01.145) 0:02:20.001 ********* 2025-05-14 14:40:10.450942 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.450948 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.450955 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.450961 | orchestrator | 2025-05-14 14:40:10.450968 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-14 14:40:10.450974 | orchestrator | Wednesday 14 May 2025 14:35:17 +0000 (0:00:01.983) 0:02:21.985 ********* 2025-05-14 14:40:10.450981 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.450987 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.450994 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.451001 | orchestrator | 2025-05-14 14:40:10.451007 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-14 14:40:10.451015 | orchestrator | Wednesday 14 May 2025 14:35:18 +0000 (0:00:00.462) 0:02:22.447 ********* 2025-05-14 14:40:10.451021 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.451028 | orchestrator | 2025-05-14 14:40:10.451034 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-14 14:40:10.451041 | orchestrator | Wednesday 14 May 2025 14:35:19 +0000 (0:00:00.994) 0:02:23.441 ********* 2025-05-14 14:40:10.451048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:40:10.451057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:40:10.451077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:40:10.451102 | orchestrator | 2025-05-14 14:40:10.451111 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-14 14:40:10.451118 | orchestrator | Wednesday 14 May 2025 14:35:23 +0000 (0:00:03.712) 0:02:27.153 ********* 2025-05-14 14:40:10.451127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:40:10.451134 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.451146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:40:10.451154 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.451162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:40:10.451169 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.451176 | orchestrator | 2025-05-14 14:40:10.451184 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-14 14:40:10.451191 | orchestrator | Wednesday 14 May 2025 14:35:23 +0000 (0:00:00.478) 0:02:27.632 ********* 2025-05-14 14:40:10.451199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 14:40:10.451213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 14:40:10.451222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 14:40:10.451229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 14:40:10.451236 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.451244 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.451251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 14:40:10.451263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 14:40:10.451271 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.451278 | orchestrator | 2025-05-14 14:40:10.451285 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-14 14:40:10.451293 | orchestrator | Wednesday 14 May 2025 14:35:24 +0000 (0:00:00.809) 0:02:28.442 ********* 2025-05-14 14:40:10.451300 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.451307 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.451315 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.451322 | orchestrator | 2025-05-14 14:40:10.451329 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-14 14:40:10.451336 | orchestrator | Wednesday 14 May 2025 14:35:25 +0000 (0:00:01.248) 0:02:29.690 ********* 2025-05-14 14:40:10.451343 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.451351 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.451358 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.451365 | orchestrator | 2025-05-14 14:40:10.451372 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-14 14:40:10.451380 | orchestrator | Wednesday 14 May 2025 14:35:27 +0000 (0:00:01.903) 
0:02:31.593 ********* 2025-05-14 14:40:10.451387 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.451394 | orchestrator | 2025-05-14 14:40:10.451401 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-14 14:40:10.451409 | orchestrator | Wednesday 14 May 2025 14:35:28 +0000 (0:00:01.251) 0:02:32.845 ********* 2025-05-14 14:40:10.451421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.451430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.451442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.451454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.451465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.451472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.451480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.451492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.451499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.451506 | orchestrator | 2025-05-14 14:40:10.451517 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-14 14:40:10.451524 | orchestrator | Wednesday 14 May 2025 14:35:36 +0000 (0:00:07.661) 0:02:40.507 ********* 2025-05-14 14:40:10.451531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.451542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.451554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.451561 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 14:40:10.451568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.451579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.451587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.451594 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.451603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.451615 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.451622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.451629 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.451635 | orchestrator | 2025-05-14 14:40:10.451642 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-14 14:40:10.451649 | orchestrator | Wednesday 14 May 2025 14:35:37 +0000 (0:00:00.990) 0:02:41.498 ********* 2025-05-14 14:40:10.451656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451690 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.451696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451724 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.451733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 14:40:10.451764 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.451771 | orchestrator | 2025-05-14 14:40:10.451777 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-14 14:40:10.451784 | orchestrator | Wednesday 14 May 2025 14:35:38 +0000 (0:00:01.521) 0:02:43.019 ********* 2025-05-14 14:40:10.451791 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.451797 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.451804 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.451810 | orchestrator | 2025-05-14 14:40:10.451817 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-14 14:40:10.451823 | orchestrator | Wednesday 14 May 2025 14:35:40 +0000 (0:00:01.473) 0:02:44.493 ********* 2025-05-14 14:40:10.451829 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.451836 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.451842 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.451849 | orchestrator | 2025-05-14 14:40:10.451855 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-14 14:40:10.451862 | orchestrator | Wednesday 14 May 2025 14:35:42 +0000 (0:00:02.403) 0:02:46.896 ********* 2025-05-14 14:40:10.451868 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.451875 | orchestrator | 2025-05-14 14:40:10.451882 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-14 14:40:10.451888 | orchestrator | Wednesday 14 May 2025 14:35:43 +0000 (0:00:01.067) 0:02:47.963 ********* 2025-05-14 14:40:10.451901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:40:10.451917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:40:10.451936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:40:10.451947 | orchestrator | 2025-05-14 14:40:10.451954 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-14 14:40:10.451961 | orchestrator | Wednesday 14 May 2025 14:35:48 +0000 (0:00:04.476) 0:02:52.440 ********* 2025-05-14 14:40:10.451968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:40:10.451976 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.451993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:40:10.452004 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.452012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:40:10.452019 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.452026 | orchestrator | 2025-05-14 14:40:10.453185 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-14 14:40:10.453220 | orchestrator | 2025-05-14 14:40:10 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:40:10.453228 | orchestrator | 2025-05-14 14:40:10 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:40:10.453235 | orchestrator | Wednesday 14 May 2025 14:35:49 +0000 (0:00:01.089) 0:02:53.529 ********* 2025-05-14 14:40:10.453243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 14:40:10.453253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 14:40:10.453273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 14:40:10.453281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 14:40:10.453289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 14:40:10.453297 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.453305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 14:40:10.453312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 14:40:10.453319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 14:40:10.453327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 14:40:10.453334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 14:40:10.453341 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.453348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 14:40:10.453360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 14:40:10.453375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 14:40:10.453382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 14:40:10.453390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 14:40:10.453397 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.453404 | orchestrator | 2025-05-14 14:40:10.453411 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-14 14:40:10.453418 | orchestrator | Wednesday 14 May 2025 14:35:50 +0000 (0:00:01.354) 0:02:54.884 ********* 2025-05-14 14:40:10.453425 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.453432 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.453439 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.453446 | orchestrator | 2025-05-14 14:40:10.453453 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-14 14:40:10.453460 | orchestrator | Wednesday 14 May 2025 14:35:52 +0000 (0:00:01.469) 0:02:56.353 ********* 2025-05-14 14:40:10.453468 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.453475 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.453482 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.453489 | orchestrator | 2025-05-14 14:40:10.453496 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-14 14:40:10.453503 | orchestrator | Wednesday 14 May 2025 14:35:54 +0000 (0:00:02.241) 0:02:58.594 ********* 2025-05-14 14:40:10.453510 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.453517 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.453524 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.453531 | orchestrator | 2025-05-14 14:40:10.453538 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-14 14:40:10.453545 | orchestrator | Wednesday 14 May 2025 14:35:54 +0000 (0:00:00.496) 0:02:59.091 ********* 2025-05-14 14:40:10.453552 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.453559 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.453566 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.453573 | orchestrator | 2025-05-14 14:40:10.453580 | orchestrator | 
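The 'haproxy' sub-dicts logged above drive kolla-ansible's haproxy-config templating: each key appears to define one HAProxy listener, with 'external' selecting the external or internal VIP, 'port' the frontend port, and 'listen_port' the port the backend nodes serve on. As a minimal illustration only (not reproduced from the job output; the dict below is abridged from the horizon entries above, and summarize_listeners() is a hypothetical helper, not a kolla-ansible function), the mapping can be read like this:

    horizon_haproxy = {
        'horizon': {'enabled': True, 'mode': 'http', 'external': False,
                    'port': '443', 'listen_port': '80',
                    'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'},
        'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False,
                             'port': '80', 'listen_port': '80'},
        'horizon_external': {'enabled': True, 'mode': 'http', 'external': True,
                             'external_fqdn': 'api.testbed.osism.xyz',
                             'port': '443', 'listen_port': '80', 'tls_backend': 'no'},
        'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True,
                                      'external_fqdn': 'api.testbed.osism.xyz',
                                      'port': '80', 'listen_port': '80'},
        'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []},
    }

    def summarize_listeners(haproxy_services):
        # Print one line per enabled listener that defines a frontend port;
        # the acme_client entry above has no 'port' and is therefore skipped.
        for name, cfg in haproxy_services.items():
            if not cfg.get('enabled') or 'port' not in cfg:
                continue
            scope = 'external' if cfg.get('external') else 'internal'
            fqdn = cfg.get('external_fqdn', 'internal VIP')
            print(f"{name}: {scope} {cfg['mode']} frontend {fqdn}:{cfg['port']} "
                  f"-> backend port {cfg['listen_port']}")

    summarize_listeners(horizon_haproxy)

Read this way, the horizon service defines internal and external HTTP frontends on port 443 plus the two port-80 redirects, all forwarding to backend port 80 on the nodes; the same key layout recurs for the keystone, magnum, manila and mariadb entries that follow.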
TASK [include_role : keystone] ************************************************* 2025-05-14 14:40:10.453587 | orchestrator | Wednesday 14 May 2025 14:35:55 +0000 (0:00:00.334) 0:02:59.426 ********* 2025-05-14 14:40:10.453594 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.453601 | orchestrator | 2025-05-14 14:40:10.453608 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-14 14:40:10.453615 | orchestrator | Wednesday 14 May 2025 14:35:56 +0000 (0:00:01.269) 0:03:00.696 ********* 2025-05-14 14:40:10.453624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:40:10.453638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:40:10.453652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:40:10.453679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:40:10.453688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:40:10.453696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:40:10.453708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:40:10.453721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 
14:40:10.453728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:40:10.453735 | orchestrator | 2025-05-14 14:40:10.453742 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-14 14:40:10.453752 | orchestrator | Wednesday 14 May 2025 14:36:00 +0000 (0:00:04.396) 0:03:05.092 ********* 2025-05-14 14:40:10.453759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:40:10.453776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:40:10.453784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:40:10.453792 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.453805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:40:10.453813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:40:10.453825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:40:10.453833 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.453841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:40:10.453854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:40:10.453865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:40:10.453873 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.453880 | orchestrator | 2025-05-14 14:40:10.453888 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-14 14:40:10.453895 | orchestrator | Wednesday 14 May 2025 14:36:01 +0000 (0:00:00.766) 0:03:05.858 ********* 2025-05-14 14:40:10.453903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 14:40:10.453912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 14:40:10.453920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 14:40:10.453928 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.453940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 14:40:10.453947 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.453955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 14:40:10.453963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 14:40:10.453976 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
14:40:10.453983 | orchestrator | 2025-05-14 14:40:10.453991 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-14 14:40:10.453998 | orchestrator | Wednesday 14 May 2025 14:36:02 +0000 (0:00:01.097) 0:03:06.956 ********* 2025-05-14 14:40:10.454005 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.454012 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.454064 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.454072 | orchestrator | 2025-05-14 14:40:10.454079 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-14 14:40:10.454099 | orchestrator | Wednesday 14 May 2025 14:36:04 +0000 (0:00:01.351) 0:03:08.307 ********* 2025-05-14 14:40:10.454107 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.454114 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.454122 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.454129 | orchestrator | 2025-05-14 14:40:10.454136 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-14 14:40:10.454143 | orchestrator | Wednesday 14 May 2025 14:36:06 +0000 (0:00:02.329) 0:03:10.637 ********* 2025-05-14 14:40:10.454149 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.454156 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.454163 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.454169 | orchestrator | 2025-05-14 14:40:10.454176 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-14 14:40:10.454183 | orchestrator | Wednesday 14 May 2025 14:36:06 +0000 (0:00:00.298) 0:03:10.936 ********* 2025-05-14 14:40:10.454189 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.454196 | orchestrator | 2025-05-14 14:40:10.454202 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-14 14:40:10.454209 | orchestrator | Wednesday 14 May 2025 14:36:08 +0000 (0:00:01.326) 0:03:12.262 ********* 2025-05-14 14:40:10.454216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:40:10.454230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:40:10.454256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:40:10.454263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454278 | orchestrator | 2025-05-14 14:40:10.454310 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-14 14:40:10.454317 | orchestrator | Wednesday 14 May 2025 14:36:12 +0000 (0:00:04.406) 0:03:16.669 ********* 2025-05-14 14:40:10.454324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:40:10.454340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454347 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.454354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:40:10.454361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454368 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.454379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:40:10.454386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454398 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.454405 | orchestrator | 2025-05-14 14:40:10.454412 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-14 14:40:10.454421 | orchestrator | Wednesday 14 May 2025 14:36:13 +0000 (0:00:00.841) 0:03:17.510 ********* 2025-05-14 14:40:10.454428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 14:40:10.454436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 14:40:10.454443 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.454449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 14:40:10.454456 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 14:40:10.454463 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.454470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 14:40:10.454476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 14:40:10.454483 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.454495 | orchestrator | 2025-05-14 14:40:10.454502 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-14 14:40:10.454510 | orchestrator | Wednesday 14 May 2025 14:36:14 +0000 (0:00:01.555) 0:03:19.066 ********* 2025-05-14 14:40:10.454517 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.454523 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.454530 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.454537 | orchestrator | 2025-05-14 14:40:10.454544 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-14 14:40:10.454551 | orchestrator | Wednesday 14 May 2025 14:36:16 +0000 (0:00:01.391) 0:03:20.458 ********* 2025-05-14 14:40:10.454558 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.454565 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.454572 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.454579 | orchestrator | 2025-05-14 14:40:10.454586 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-14 14:40:10.454593 | orchestrator | Wednesday 14 May 2025 14:36:18 +0000 (0:00:02.555) 0:03:23.013 ********* 2025-05-14 14:40:10.454600 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.454607 | orchestrator | 2025-05-14 14:40:10.454614 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-14 14:40:10.454621 | orchestrator | Wednesday 14 May 2025 14:36:20 +0000 (0:00:01.195) 0:03:24.208 ********* 2025-05-14 14:40:10.454637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 14:40:10.454645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 14:40:10.454679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 14:40:10.454759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454806 | orchestrator | 2025-05-14 14:40:10.454813 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-14 14:40:10.454820 | orchestrator | Wednesday 14 May 2025 14:36:25 +0000 (0:00:05.507) 0:03:29.716 ********* 2025-05-14 14:40:10.454842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 14:40:10.454850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454876 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.454883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 14:40:10.454895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454923 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.454933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 14:40:10.454941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.454967 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.454973 | orchestrator | 2025-05-14 14:40:10.454980 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-14 14:40:10.454987 | orchestrator | Wednesday 14 May 2025 14:36:26 +0000 (0:00:00.787) 0:03:30.503 ********* 2025-05-14 14:40:10.454998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 14:40:10.455005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 14:40:10.455011 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 
14:40:10.455025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 14:40:10.455032 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 14:40:10.455049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 14:40:10.455055 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455062 | orchestrator | 2025-05-14 14:40:10.455068 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-14 14:40:10.455075 | orchestrator | Wednesday 14 May 2025 14:36:27 +0000 (0:00:00.958) 0:03:31.462 ********* 2025-05-14 14:40:10.455082 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.455135 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.455143 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.455149 | orchestrator | 2025-05-14 14:40:10.455156 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-14 14:40:10.455162 | orchestrator | Wednesday 14 May 2025 14:36:28 +0000 (0:00:01.269) 0:03:32.732 ********* 2025-05-14 14:40:10.455169 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.455176 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.455182 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.455189 | orchestrator | 2025-05-14 14:40:10.455195 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-14 14:40:10.455206 | orchestrator | Wednesday 14 May 2025 14:36:30 +0000 (0:00:02.125) 0:03:34.857 ********* 2025-05-14 14:40:10.455213 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.455220 | orchestrator | 2025-05-14 14:40:10.455226 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-14 14:40:10.455233 | orchestrator | Wednesday 14 May 2025 14:36:31 +0000 (0:00:01.191) 0:03:36.049 ********* 2025-05-14 14:40:10.455240 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:40:10.455247 | orchestrator | 2025-05-14 14:40:10.455254 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-14 14:40:10.455260 | orchestrator | Wednesday 14 May 2025 14:36:34 +0000 (0:00:02.910) 0:03:38.959 ********* 2025-05-14 14:40:10.455287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 14:40:10.455297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 14:40:10.455304 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 14:40:10.455328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 14:40:10.455335 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 14:40:10.455360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 14:40:10.455372 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455379 | orchestrator | 2025-05-14 14:40:10.455386 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-14 14:40:10.455393 | orchestrator | Wednesday 14 May 2025 14:36:37 +0000 (0:00:03.121) 0:03:42.081 ********* 2025-05-14 14:40:10.455400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 14:40:10.455412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 14:40:10.455421 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 14:40:10.455444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 14:40:10.455451 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 14:40:10.455474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 14:40:10.455487 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455494 | orchestrator | 2025-05-14 14:40:10.455500 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-14 14:40:10.455507 | orchestrator | Wednesday 14 May 2025 14:36:41 +0000 (0:00:03.183) 0:03:45.264 ********* 2025-05-14 14:40:10.455514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 14:40:10.455521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 14:40:10.455528 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 14:40:10.455542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 14:40:10.455549 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 14:40:10.455568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 14:40:10.455583 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455589 | orchestrator | 2025-05-14 14:40:10.455596 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-14 14:40:10.455606 | orchestrator | Wednesday 14 May 2025 14:36:44 +0000 (0:00:03.354) 0:03:48.618 ********* 2025-05-14 14:40:10.455613 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.455620 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.455626 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.455633 | orchestrator | 2025-05-14 14:40:10.455640 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-14 14:40:10.455646 | orchestrator | Wednesday 14 May 2025 14:36:46 +0000 (0:00:02.342) 0:03:50.961 ********* 2025-05-14 14:40:10.455653 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455659 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455665 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455671 | orchestrator | 2025-05-14 14:40:10.455678 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-14 14:40:10.455684 | orchestrator | Wednesday 14 May 2025 14:36:48 +0000 (0:00:01.844) 0:03:52.805 ********* 2025-05-14 14:40:10.455690 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455696 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455702 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455708 | orchestrator | 2025-05-14 14:40:10.455714 | orchestrator | TASK [include_role : memcached] ************************************************ 
2025-05-14 14:40:10.455720 | orchestrator | Wednesday 14 May 2025 14:36:49 +0000 (0:00:00.434) 0:03:53.240 ********* 2025-05-14 14:40:10.455726 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.455733 | orchestrator | 2025-05-14 14:40:10.455739 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-14 14:40:10.455745 | orchestrator | Wednesday 14 May 2025 14:36:50 +0000 (0:00:01.203) 0:03:54.443 ********* 2025-05-14 14:40:10.455752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 14:40:10.455759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 14:40:10.455771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 14:40:10.455789 | orchestrator | 2025-05-14 14:40:10.455795 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-14 14:40:10.455802 | orchestrator | Wednesday 14 May 2025 14:36:52 +0000 (0:00:01.712) 0:03:56.155 ********* 2025-05-14 14:40:10.455812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 14:40:10.455818 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 14:40:10.455831 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 14:40:10.455844 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455850 | orchestrator | 2025-05-14 14:40:10.455857 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-14 14:40:10.455863 | orchestrator | Wednesday 14 May 2025 14:36:52 +0000 (0:00:00.368) 0:03:56.524 ********* 2025-05-14 14:40:10.455869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 14:40:10.455875 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 14:40:10.455894 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 14:40:10.455913 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455919 | orchestrator | 2025-05-14 14:40:10.455925 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-14 14:40:10.455931 | orchestrator | Wednesday 14 May 2025 14:36:53 +0000 (0:00:00.981) 0:03:57.505 ********* 2025-05-14 14:40:10.455937 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455944 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455950 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455957 | orchestrator | 2025-05-14 14:40:10.455963 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-14 14:40:10.455969 | orchestrator | Wednesday 14 May 2025 14:36:54 +0000 (0:00:00.694) 0:03:58.200 ********* 2025-05-14 14:40:10.455975 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.455981 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.455987 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.455993 | orchestrator | 2025-05-14 14:40:10.455999 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-14 14:40:10.456005 | orchestrator | Wednesday 14 May 2025 14:36:55 +0000 (0:00:01.371) 0:03:59.571 ********* 2025-05-14 14:40:10.456012 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.456018 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.456024 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.456030 | orchestrator | 2025-05-14 14:40:10.456036 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-14 14:40:10.456042 | orchestrator | Wednesday 14 May 2025 14:36:55 +0000 (0:00:00.263) 0:03:59.835 ********* 2025-05-14 14:40:10.456052 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.456058 | orchestrator | 2025-05-14 14:40:10.456064 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-14 14:40:10.456070 | orchestrator | Wednesday 14 May 2025 14:36:57 +0000 (0:00:01.308) 0:04:01.143 ********* 2025-05-14 14:40:10.456077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:40:10.456094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:40:10.456137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:40:10.456189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.456206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:40:10.456218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2025-05-14 14:40:10.456242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.456261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:40:10.456291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:40:10.456301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456565 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:40:10.456577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.456630 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.456685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.456729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.456759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456772 | orchestrator | 2025-05-14 14:40:10.456778 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-14 14:40:10.456784 | orchestrator | Wednesday 14 May 2025 14:37:02 +0000 (0:00:05.136) 0:04:06.280 ********* 2025-05-14 14:40:10.456794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:40:10.456804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:40:10.456836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:40:10.456849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:40:10.456920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.456934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:40:10.456960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.456974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.456988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.456994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.457020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.457027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:40:10.457065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.457073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.457080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.457135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.457142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.457160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.457186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.457194 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.457201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.457229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.457236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457251 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.457258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.457265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:40:10.457276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:40:10.457301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:40:10.457309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457316 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.457323 | orchestrator | 2025-05-14 14:40:10.457330 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-14 14:40:10.457337 | orchestrator | Wednesday 14 May 2025 14:37:03 +0000 (0:00:01.816) 0:04:08.096 ********* 2025-05-14 14:40:10.457344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 14:40:10.457352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 14:40:10.457358 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.457365 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 14:40:10.457376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 14:40:10.457383 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.457390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 14:40:10.457397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 14:40:10.457404 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.457411 | orchestrator | 2025-05-14 14:40:10.457418 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-14 14:40:10.457425 | orchestrator | Wednesday 14 May 2025 14:37:05 +0000 (0:00:01.895) 0:04:09.992 ********* 2025-05-14 14:40:10.457431 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.457438 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.457445 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.457451 | orchestrator | 2025-05-14 14:40:10.457458 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-14 14:40:10.457465 | orchestrator | Wednesday 14 May 2025 14:37:07 +0000 (0:00:01.529) 0:04:11.522 ********* 2025-05-14 14:40:10.457472 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.457479 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.457486 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.457493 | orchestrator | 2025-05-14 14:40:10.457500 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-14 14:40:10.457506 | orchestrator | Wednesday 14 May 2025 14:37:09 +0000 (0:00:02.372) 0:04:13.894 ********* 2025-05-14 14:40:10.457513 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.457519 | orchestrator | 2025-05-14 14:40:10.457529 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-14 14:40:10.457535 | orchestrator | Wednesday 14 May 2025 14:37:11 +0000 (0:00:01.560) 0:04:15.455 ********* 2025-05-14 14:40:10.457546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.457553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.457564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.457570 | orchestrator | 2025-05-14 14:40:10.457577 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-14 14:40:10.457583 | orchestrator | Wednesday 14 May 2025 14:37:15 +0000 (0:00:03.821) 0:04:19.277 ********* 2025-05-14 14:40:10.457589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.457596 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.457609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.457616 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.457623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.457636 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.457643 | orchestrator | 2025-05-14 14:40:10.457649 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-14 14:40:10.457655 | orchestrator | Wednesday 14 May 2025 14:37:15 +0000 (0:00:00.787) 0:04:20.064 ********* 2025-05-14 14:40:10.457660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 14:40:10.457666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 14:40:10.457672 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.457677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 14:40:10.457683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 14:40:10.457688 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.457694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 
14:40:10.457699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 14:40:10.457705 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.457710 | orchestrator | 2025-05-14 14:40:10.457716 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-14 14:40:10.457721 | orchestrator | Wednesday 14 May 2025 14:37:16 +0000 (0:00:00.989) 0:04:21.053 ********* 2025-05-14 14:40:10.457726 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.457732 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.457737 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.457742 | orchestrator | 2025-05-14 14:40:10.457748 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-14 14:40:10.457753 | orchestrator | Wednesday 14 May 2025 14:37:18 +0000 (0:00:01.456) 0:04:22.510 ********* 2025-05-14 14:40:10.457758 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.457763 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.457769 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.457774 | orchestrator | 2025-05-14 14:40:10.457782 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-14 14:40:10.457788 | orchestrator | Wednesday 14 May 2025 14:37:20 +0000 (0:00:02.485) 0:04:24.995 ********* 2025-05-14 14:40:10.457793 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.457799 | orchestrator | 2025-05-14 14:40:10.457808 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-14 14:40:10.457813 | orchestrator | Wednesday 14 May 2025 14:37:22 +0000 (0:00:01.621) 0:04:26.617 ********* 2025-05-14 14:40:10.457820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.457830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.457937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.457965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.457984 | orchestrator | 2025-05-14 14:40:10.457990 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-14 14:40:10.457996 | orchestrator | Wednesday 14 May 2025 14:37:27 +0000 (0:00:05.473) 0:04:32.091 ********* 2025-05-14 14:40:10.458009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.458048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.458056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.458062 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.458074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.458101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.458113 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.458126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.458132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.458138 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458143 | orchestrator | 2025-05-14 14:40:10.458149 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-14 14:40:10.458155 | orchestrator | Wednesday 14 May 2025 14:37:28 +0000 (0:00:00.849) 0:04:32.940 ********* 2025-05-14 14:40:10.458161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458192 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458223 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 14:40:10.458250 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458255 | orchestrator | 2025-05-14 14:40:10.458261 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-14 14:40:10.458266 | orchestrator | Wednesday 14 May 2025 14:37:30 +0000 (0:00:01.324) 0:04:34.265 ********* 2025-05-14 14:40:10.458271 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.458277 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.458282 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.458288 | orchestrator | 2025-05-14 14:40:10.458293 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-14 14:40:10.458298 | orchestrator | Wednesday 14 May 2025 14:37:31 +0000 (0:00:01.445) 0:04:35.710 ********* 2025-05-14 14:40:10.458304 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.458309 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.458314 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.458320 | orchestrator | 2025-05-14 14:40:10.458325 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-14 14:40:10.458330 | orchestrator | Wednesday 14 May 2025 14:37:33 +0000 (0:00:02.400) 0:04:38.111 ********* 2025-05-14 14:40:10.458336 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.458341 | orchestrator | 2025-05-14 14:40:10.458346 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-14 14:40:10.458352 | orchestrator | Wednesday 14 May 2025 14:37:35 +0000 (0:00:01.462) 0:04:39.574 ********* 2025-05-14 14:40:10.458362 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-14 14:40:10.458368 | orchestrator | 2025-05-14 14:40:10.458373 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-14 14:40:10.458378 | orchestrator | Wednesday 14 May 2025 14:37:37 +0000 (0:00:01.584) 0:04:41.158 ********* 2025-05-14 14:40:10.458384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 14:40:10.458394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 14:40:10.458404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 14:40:10.458410 | orchestrator | 2025-05-14 14:40:10.458415 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-14 14:40:10.458422 | orchestrator | Wednesday 14 May 2025 14:37:42 +0000 (0:00:05.552) 0:04:46.710 ********* 2025-05-14 14:40:10.458427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458433 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458445 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458460 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458466 | orchestrator | 2025-05-14 14:40:10.458471 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-14 14:40:10.458477 | orchestrator | Wednesday 14 May 2025 14:37:44 +0000 (0:00:01.473) 0:04:48.184 ********* 2025-05-14 14:40:10.458482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 14:40:10.458488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 14:40:10.458494 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 14:40:10.458505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 14:40:10.458511 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 14:40:10.458525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 14:40:10.458531 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458536 | orchestrator | 2025-05-14 14:40:10.458541 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 14:40:10.458551 | orchestrator | Wednesday 14 May 2025 14:37:46 +0000 (0:00:02.299) 0:04:50.483 ********* 2025-05-14 14:40:10.458558 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.458563 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.458570 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.458575 | orchestrator | 2025-05-14 14:40:10.458581 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 14:40:10.458587 | orchestrator | Wednesday 14 May 2025 14:37:49 +0000 (0:00:02.831) 0:04:53.314 ********* 2025-05-14 14:40:10.458593 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.458599 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.458605 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.458611 | orchestrator | 2025-05-14 14:40:10.458617 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-14 14:40:10.458623 | orchestrator | Wednesday 14 May 2025 14:37:52 +0000 (0:00:02.920) 0:04:56.235 ********* 2025-05-14 14:40:10.458629 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-14 14:40:10.458635 | orchestrator | 2025-05-14 14:40:10.458641 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-14 14:40:10.458648 | orchestrator | Wednesday 14 May 2025 14:37:53 +0000 (0:00:01.029) 0:04:57.264 ********* 2025-05-14 14:40:10.458654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458666 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
14:40:10.458672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458679 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458691 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458697 | orchestrator | 2025-05-14 14:40:10.458703 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-14 14:40:10.458709 | orchestrator | Wednesday 14 May 2025 14:37:54 +0000 (0:00:01.694) 0:04:58.959 ********* 2025-05-14 14:40:10.458716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458722 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458739 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 14:40:10.458755 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458762 | orchestrator | 2025-05-14 
14:40:10.458768 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-14 14:40:10.458774 | orchestrator | Wednesday 14 May 2025 14:37:56 +0000 (0:00:01.582) 0:05:00.542 ********* 2025-05-14 14:40:10.458779 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458786 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458796 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458801 | orchestrator | 2025-05-14 14:40:10.458807 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 14:40:10.458814 | orchestrator | Wednesday 14 May 2025 14:37:57 +0000 (0:00:01.508) 0:05:02.051 ********* 2025-05-14 14:40:10.458820 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.458826 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.458832 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.458838 | orchestrator | 2025-05-14 14:40:10.458844 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 14:40:10.458850 | orchestrator | Wednesday 14 May 2025 14:38:00 +0000 (0:00:02.396) 0:05:04.448 ********* 2025-05-14 14:40:10.458856 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.458862 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.458868 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.458875 | orchestrator | 2025-05-14 14:40:10.458881 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-14 14:40:10.458887 | orchestrator | Wednesday 14 May 2025 14:38:03 +0000 (0:00:03.342) 0:05:07.790 ********* 2025-05-14 14:40:10.458893 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-14 14:40:10.458899 | orchestrator | 2025-05-14 14:40:10.458905 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-14 14:40:10.458911 | orchestrator | Wednesday 14 May 2025 14:38:05 +0000 (0:00:01.387) 0:05:09.178 ********* 2025-05-14 14:40:10.458917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 14:40:10.458923 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.458929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 14:40:10.458934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.458940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 14:40:10.458946 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458951 | orchestrator | 2025-05-14 14:40:10.458956 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-14 14:40:10.458962 | orchestrator | Wednesday 14 May 2025 14:38:06 +0000 (0:00:01.611) 0:05:10.789 ********* 2025-05-14 14:40:10.458976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 14:40:10.458986 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.458991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 14:40:10.458997 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.459003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 14:40:10.459009 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.459015 | orchestrator | 2025-05-14 14:40:10.459020 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-14 14:40:10.459025 | orchestrator | Wednesday 14 May 2025 14:38:08 +0000 (0:00:01.812) 0:05:12.602 ********* 2025-05-14 14:40:10.459031 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.459036 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.459041 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.459046 | orchestrator | 2025-05-14 14:40:10.459052 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 14:40:10.459057 | orchestrator | Wednesday 14 May 2025 14:38:10 +0000 (0:00:01.911) 
0:05:14.514 ********* 2025-05-14 14:40:10.459062 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.459068 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.459073 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.459078 | orchestrator | 2025-05-14 14:40:10.459083 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 14:40:10.459101 | orchestrator | Wednesday 14 May 2025 14:38:13 +0000 (0:00:03.002) 0:05:17.516 ********* 2025-05-14 14:40:10.459107 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.459112 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.459118 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.459123 | orchestrator | 2025-05-14 14:40:10.459129 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-14 14:40:10.459134 | orchestrator | Wednesday 14 May 2025 14:38:17 +0000 (0:00:03.987) 0:05:21.503 ********* 2025-05-14 14:40:10.459139 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.459145 | orchestrator | 2025-05-14 14:40:10.459150 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-14 14:40:10.459155 | orchestrator | Wednesday 14 May 2025 14:38:19 +0000 (0:00:01.775) 0:05:23.278 ********* 2025-05-14 14:40:10.459161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.459176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 14:40:10.459186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.459204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.459210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 14:40:10.459220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459233 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.459245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.459251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 14:40:10.459256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.459288 | orchestrator | 2025-05-14 14:40:10.459293 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-14 14:40:10.459299 | orchestrator | Wednesday 14 May 2025 14:38:23 +0000 (0:00:04.006) 0:05:27.285 ********* 2025-05-14 14:40:10.459304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.459311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 14:40:10.459317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.459340 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.459349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.459355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 14:40:10.459361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.459383 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.459392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.459466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 14:40:10.459484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 14:40:10.459496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:40:10.459506 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.459512 | orchestrator | 2025-05-14 14:40:10.459517 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-14 14:40:10.459523 | orchestrator | Wednesday 14 May 2025 14:38:24 +0000 (0:00:00.912) 0:05:28.197 ********* 2025-05-14 14:40:10.459529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 14:40:10.459535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 14:40:10.459541 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.459547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 14:40:10.459552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 14:40:10.459558 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.459563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 14:40:10.459569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 14:40:10.459574 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.459580 | 
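Each octavia item listed above carries a 'healthcheck' block of the form {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', ...], 'timeout': '30'}. As a rough Python illustration of what such a block amounts to (a sketch only, assuming the bare numbers are seconds; kolla-ansible applies it through its own container module rather than through docker run flags):

    # Minimal sketch, not kolla-ansible code: translate a kolla-style
    # healthcheck dict (as seen on the octavia items above) into the
    # equivalent docker-run style health-check flags.
    # Assumption: the bare numeric values are seconds.
    from typing import Dict, List

    def healthcheck_to_docker_args(hc: Dict) -> List[str]:
        """Build docker-run style flags from a kolla healthcheck dict."""
        args = [
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", str(hc["retries"]),
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]
        test = hc["test"]
        if test and test[0] == "CMD-SHELL":
            # CMD-SHELL means: run the remainder of the list through a shell
            args += ["--health-cmd", " ".join(test[1:])]
        return args

    if __name__ == "__main__":
        # Values copied from the octavia-api item for testbed-node-1 above
        octavia_api_hc = {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9876"],
            "timeout": "30",
        }
        print(" ".join(healthcheck_to_docker_args(octavia_api_hc)))

The same healthcheck shape appears on the opensearch and prometheus items further down; only the test command and the per-node target address change.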
orchestrator | 2025-05-14 14:40:10.459590 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-14 14:40:10.459596 | orchestrator | Wednesday 14 May 2025 14:38:25 +0000 (0:00:00.934) 0:05:29.132 ********* 2025-05-14 14:40:10.459601 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.459607 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.459612 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.459617 | orchestrator | 2025-05-14 14:40:10.459626 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-14 14:40:10.459632 | orchestrator | Wednesday 14 May 2025 14:38:26 +0000 (0:00:01.393) 0:05:30.525 ********* 2025-05-14 14:40:10.459637 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.459642 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.459648 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.459653 | orchestrator | 2025-05-14 14:40:10.459658 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-14 14:40:10.459663 | orchestrator | Wednesday 14 May 2025 14:38:28 +0000 (0:00:02.480) 0:05:33.005 ********* 2025-05-14 14:40:10.459669 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.459674 | orchestrator | 2025-05-14 14:40:10.459679 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-14 14:40:10.459685 | orchestrator | Wednesday 14 May 2025 14:38:30 +0000 (0:00:01.547) 0:05:34.553 ********* 2025-05-14 14:40:10.459691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:40:10.459702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:40:10.459708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:40:10.459722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:40:10.459729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:40:10.459741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:40:10.459747 | orchestrator | 2025-05-14 14:40:10.459753 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-14 14:40:10.459758 | orchestrator | Wednesday 14 May 2025 14:38:36 +0000 (0:00:06.275) 0:05:40.829 ********* 2025-05-14 14:40:10.459764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:40:10.459777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:40:10.459783 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.459789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:40:10.459799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:40:10.459805 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.459811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:40:10.459884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:40:10.459892 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.459897 | orchestrator | 2025-05-14 14:40:10.459902 | orchestrator | TASK [haproxy-config : 
Configuring firewall for opensearch] ******************** 2025-05-14 14:40:10.459908 | orchestrator | Wednesday 14 May 2025 14:38:37 +0000 (0:00:00.737) 0:05:41.566 ********* 2025-05-14 14:40:10.459913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 14:40:10.459924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 14:40:10.459930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 14:40:10.459936 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.459942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 14:40:10.459947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 14:40:10.459953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 14:40:10.459958 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.459964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 14:40:10.459969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 14:40:10.459975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 14:40:10.459980 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.459986 | orchestrator | 2025-05-14 14:40:10.459991 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-14 14:40:10.459996 | orchestrator | Wednesday 14 May 2025 14:38:38 +0000 (0:00:01.242) 0:05:42.809 ********* 2025-05-14 14:40:10.460002 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460007 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460012 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460017 | orchestrator | 2025-05-14 14:40:10.460023 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-14 14:40:10.460028 | orchestrator | Wednesday 14 May 2025 14:38:39 +0000 (0:00:00.741) 
0:05:43.550 ********* 2025-05-14 14:40:10.460033 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460039 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460044 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460049 | orchestrator | 2025-05-14 14:40:10.460055 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-14 14:40:10.460060 | orchestrator | Wednesday 14 May 2025 14:38:41 +0000 (0:00:01.832) 0:05:45.383 ********* 2025-05-14 14:40:10.460065 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.460071 | orchestrator | 2025-05-14 14:40:10.460076 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-14 14:40:10.460081 | orchestrator | Wednesday 14 May 2025 14:38:43 +0000 (0:00:01.921) 0:05:47.304 ********* 2025-05-14 14:40:10.460120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 14:40:10.460132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:40:10.460138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 14:40:10.460162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:40:10.460192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 
14:40:10.460210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 14:40:10.460216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:40:10.460222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:40:10.460269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:40:10.460275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:40:10.460297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:40:10.460313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:40:10.460361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:40:10.460373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460400 | orchestrator | 2025-05-14 14:40:10.460409 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-14 14:40:10.460414 | orchestrator | Wednesday 14 May 2025 14:38:48 +0000 (0:00:05.068) 0:05:52.372 ********* 2025-05-14 14:40:10.460423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:40:10.460429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:40:10.460435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:40:10.460469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:40:10.460476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:40:10.460508 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-05-14 14:40:10.460526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:40:10.460554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:40:10.460563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460590 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:40:10.460607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:40:10.460614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:40:10.460644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:40:10.460655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:40:10.460678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:40:10.460684 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460690 | orchestrator | 2025-05-14 14:40:10.460695 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-14 14:40:10.460701 | orchestrator | Wednesday 14 May 2025 14:38:49 +0000 (0:00:01.586) 0:05:53.959 ********* 2025-05-14 14:40:10.460707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 14:40:10.460712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  
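The items skipped above show the shape of the per-service definitions the haproxy-config role loops over: each service carries an 'enabled' flag plus a nested 'haproxy' mapping of listener entries (mode, port, internal/external, optional auth). Below is a minimal Python sketch of that filtering step, reusing the dict shape from the log output; it is an illustration only, not kolla-ansible code, and the helper name enabled_listeners is hypothetical.

# Sketch: collect the haproxy listeners that are actually enabled for a host,
# mirroring the decision behind the "skipping" lines above. The data layout is
# copied from the log; the function itself is a hypothetical illustration.
def enabled_listeners(services):
    """Return (listener_name, port, external) for every enabled listener."""
    selected = []
    for service in services.values():
        if not service.get("enabled"):
            continue  # disabled services never reach the haproxy/firewall step
        for name, listener in service.get("haproxy", {}).items():
            # kolla renders 'enabled' either as a boolean or as the string 'yes'
            if listener.get("enabled") in (True, "yes"):
                selected.append((name, listener.get("port"),
                                 bool(listener.get("external"))))
    return selected

services = {
    "prometheus-alertmanager": {
        "enabled": True,
        "haproxy": {
            "prometheus_alertmanager": {
                "enabled": True, "mode": "http", "external": False, "port": "9093",
            },
        },
    },
}
print(enabled_listeners(services))  # [('prometheus_alertmanager', '9093', False)]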
2025-05-14 14:40:10.460719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 14:40:10.460725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 14:40:10.460735 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 14:40:10.460746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 14:40:10.460752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 14:40:10.460757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 14:40:10.460763 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 14:40:10.460774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 14:40:10.460779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 14:40:10.460785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 14:40:10.460791 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460796 | orchestrator | 2025-05-14 14:40:10.460802 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-14 14:40:10.460807 | orchestrator | Wednesday 14 May 2025 14:38:51 +0000 (0:00:01.395) 0:05:55.354 ********* 2025-05-14 14:40:10.460815 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460821 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460826 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460832 | orchestrator | 2025-05-14 14:40:10.460837 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-14 14:40:10.460846 | orchestrator | Wednesday 14 May 2025 14:38:52 +0000 (0:00:01.021) 0:05:56.375 ********* 2025-05-14 14:40:10.460851 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460857 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460862 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460867 | orchestrator | 2025-05-14 14:40:10.460873 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-14 14:40:10.460878 | orchestrator | Wednesday 14 May 2025 14:38:53 +0000 (0:00:01.731) 0:05:58.106 ********* 2025-05-14 14:40:10.460883 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.460888 | orchestrator | 2025-05-14 14:40:10.460894 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-14 14:40:10.460899 | orchestrator | Wednesday 14 May 2025 14:38:55 +0000 (0:00:01.642) 0:05:59.749 ********* 2025-05-14 14:40:10.460909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:40:10.460916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:40:10.460922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 14:40:10.460928 | orchestrator | 2025-05-14 14:40:10.460933 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-14 14:40:10.460939 | orchestrator | Wednesday 14 May 2025 14:38:58 +0000 (0:00:02.943) 0:06:02.692 ********* 2025-05-14 14:40:10.460951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 14:40:10.460962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 14:40:10.460968 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.460973 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.460979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 14:40:10.460985 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.460990 | orchestrator | 2025-05-14 14:40:10.460996 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-14 14:40:10.461001 | orchestrator | Wednesday 14 May 2025 14:38:59 +0000 (0:00:00.699) 0:06:03.391 ********* 2025-05-14 14:40:10.461007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 14:40:10.461012 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 14:40:10.461023 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 14:40:10.461033 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461039 | orchestrator | 2025-05-14 14:40:10.461044 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-14 14:40:10.461049 | orchestrator | Wednesday 14 May 2025 14:39:00 +0000 (0:00:01.187) 0:06:04.579 ********* 2025-05-14 14:40:10.461055 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461060 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461065 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461071 | orchestrator | 2025-05-14 14:40:10.461076 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-14 14:40:10.461123 | orchestrator | Wednesday 14 May 2025 14:39:01 +0000 (0:00:00.749) 0:06:05.328 ********* 2025-05-14 14:40:10.461130 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461136 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461141 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461147 | orchestrator | 2025-05-14 14:40:10.461156 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-14 14:40:10.461161 | orchestrator | Wednesday 14 May 2025 14:39:03 +0000 (0:00:01.827) 0:06:07.155 ********* 2025-05-14 14:40:10.461167 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:40:10.461172 | orchestrator | 2025-05-14 14:40:10.461177 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy 
config] ******************** 2025-05-14 14:40:10.461183 | orchestrator | Wednesday 14 May 2025 14:39:05 +0000 (0:00:01.959) 0:06:09.114 ********* 2025-05-14 14:40:10.461188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.461195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.461201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.461213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.461225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.461230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 14:40:10.461236 | orchestrator | 2025-05-14 14:40:10.461242 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-14 14:40:10.461247 | orchestrator | Wednesday 14 May 2025 14:39:13 +0000 (0:00:08.726) 0:06:17.841 ********* 2025-05-14 14:40:10.461252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.461261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.461271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.461286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.461292 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.461303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 14:40:10.461314 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461320 | orchestrator | 2025-05-14 14:40:10.461325 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-14 14:40:10.461332 | orchestrator | Wednesday 14 May 2025 14:39:14 +0000 (0:00:00.917) 0:06:18.758 ********* 2025-05-14 14:40:10.461340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461360 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461384 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 14:40:10.461412 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461416 | orchestrator | 2025-05-14 14:40:10.461421 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-14 14:40:10.461426 | orchestrator | Wednesday 14 May 2025 14:39:16 +0000 (0:00:01.780) 0:06:20.539 ********* 2025-05-14 14:40:10.461431 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.461435 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.461440 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.461445 | orchestrator | 2025-05-14 14:40:10.461449 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-14 14:40:10.461454 | orchestrator | Wednesday 14 May 2025 14:39:17 +0000 (0:00:01.514) 0:06:22.054 ********* 2025-05-14 14:40:10.461459 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.461464 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.461468 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.461473 | orchestrator | 2025-05-14 14:40:10.461478 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-14 14:40:10.461482 | orchestrator | Wednesday 14 May 2025 14:39:20 +0000 (0:00:02.474) 0:06:24.528 ********* 2025-05-14 14:40:10.461487 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461492 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461496 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461501 | orchestrator | 2025-05-14 14:40:10.461506 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-14 14:40:10.461510 | orchestrator | Wednesday 14 May 2025 14:39:20 +0000 (0:00:00.344) 0:06:24.873 
********* 2025-05-14 14:40:10.461515 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461520 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461525 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461529 | orchestrator | 2025-05-14 14:40:10.461534 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-14 14:40:10.461541 | orchestrator | Wednesday 14 May 2025 14:39:21 +0000 (0:00:00.599) 0:06:25.473 ********* 2025-05-14 14:40:10.461546 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461551 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461556 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461561 | orchestrator | 2025-05-14 14:40:10.461569 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-14 14:40:10.461574 | orchestrator | Wednesday 14 May 2025 14:39:21 +0000 (0:00:00.598) 0:06:26.071 ********* 2025-05-14 14:40:10.461579 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461588 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461593 | orchestrator | 2025-05-14 14:40:10.461598 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-14 14:40:10.461602 | orchestrator | Wednesday 14 May 2025 14:39:22 +0000 (0:00:00.320) 0:06:26.391 ********* 2025-05-14 14:40:10.461607 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461612 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461616 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461621 | orchestrator | 2025-05-14 14:40:10.461626 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-14 14:40:10.461630 | orchestrator | Wednesday 14 May 2025 14:39:22 +0000 (0:00:00.601) 0:06:26.993 ********* 2025-05-14 14:40:10.461635 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461640 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461644 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461649 | orchestrator | 2025-05-14 14:40:10.461654 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-14 14:40:10.461658 | orchestrator | Wednesday 14 May 2025 14:39:23 +0000 (0:00:01.001) 0:06:27.994 ********* 2025-05-14 14:40:10.461663 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.461672 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461677 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461681 | orchestrator | 2025-05-14 14:40:10.461686 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-14 14:40:10.461691 | orchestrator | Wednesday 14 May 2025 14:39:24 +0000 (0:00:00.695) 0:06:28.690 ********* 2025-05-14 14:40:10.461696 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.461700 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461705 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461710 | orchestrator | 2025-05-14 14:40:10.461714 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-14 14:40:10.461719 | orchestrator | Wednesday 14 May 2025 14:39:25 +0000 (0:00:00.628) 0:06:29.318 ********* 2025-05-14 14:40:10.461724 | orchestrator | ok: [testbed-node-0] 2025-05-14 
14:40:10.461729 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461733 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461738 | orchestrator | 2025-05-14 14:40:10.461743 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-14 14:40:10.461748 | orchestrator | Wednesday 14 May 2025 14:39:26 +0000 (0:00:01.282) 0:06:30.600 ********* 2025-05-14 14:40:10.461752 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.461757 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461762 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461766 | orchestrator | 2025-05-14 14:40:10.461771 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-14 14:40:10.461776 | orchestrator | Wednesday 14 May 2025 14:39:27 +0000 (0:00:01.255) 0:06:31.855 ********* 2025-05-14 14:40:10.461781 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.461785 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461790 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461795 | orchestrator | 2025-05-14 14:40:10.461799 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-14 14:40:10.461804 | orchestrator | Wednesday 14 May 2025 14:39:28 +0000 (0:00:00.960) 0:06:32.816 ********* 2025-05-14 14:40:10.461809 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.461814 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.461818 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.461823 | orchestrator | 2025-05-14 14:40:10.461828 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-14 14:40:10.461832 | orchestrator | Wednesday 14 May 2025 14:39:39 +0000 (0:00:10.303) 0:06:43.120 ********* 2025-05-14 14:40:10.461837 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.461842 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461846 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461851 | orchestrator | 2025-05-14 14:40:10.461856 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-14 14:40:10.461861 | orchestrator | Wednesday 14 May 2025 14:39:40 +0000 (0:00:01.300) 0:06:44.420 ********* 2025-05-14 14:40:10.461865 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.461870 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.461875 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:40:10.461880 | orchestrator | 2025-05-14 14:40:10.461884 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-14 14:40:10.461889 | orchestrator | Wednesday 14 May 2025 14:39:52 +0000 (0:00:12.031) 0:06:56.451 ********* 2025-05-14 14:40:10.461894 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.461898 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.461903 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.461908 | orchestrator | 2025-05-14 14:40:10.461913 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-14 14:40:10.461917 | orchestrator | Wednesday 14 May 2025 14:39:53 +0000 (0:00:00.759) 0:06:57.211 ********* 2025-05-14 14:40:10.461922 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:40:10.461927 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:40:10.461932 | orchestrator | changed: [testbed-node-2] 2025-05-14 
14:40:10.461941 | orchestrator | 2025-05-14 14:40:10.461946 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-14 14:40:10.461950 | orchestrator | Wednesday 14 May 2025 14:40:02 +0000 (0:00:09.604) 0:07:06.815 ********* 2025-05-14 14:40:10.461955 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461960 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461965 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.461969 | orchestrator | 2025-05-14 14:40:10.461974 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-14 14:40:10.461979 | orchestrator | Wednesday 14 May 2025 14:40:03 +0000 (0:00:00.619) 0:07:07.435 ********* 2025-05-14 14:40:10.461984 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.461991 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.461996 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.462001 | orchestrator | 2025-05-14 14:40:10.462005 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-14 14:40:10.462050 | orchestrator | Wednesday 14 May 2025 14:40:03 +0000 (0:00:00.356) 0:07:07.792 ********* 2025-05-14 14:40:10.462056 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.462061 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.462066 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.462071 | orchestrator | 2025-05-14 14:40:10.462075 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-14 14:40:10.462080 | orchestrator | Wednesday 14 May 2025 14:40:04 +0000 (0:00:00.676) 0:07:08.468 ********* 2025-05-14 14:40:10.462093 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.462098 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.462103 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.462108 | orchestrator | 2025-05-14 14:40:10.462112 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-14 14:40:10.462117 | orchestrator | Wednesday 14 May 2025 14:40:04 +0000 (0:00:00.642) 0:07:09.110 ********* 2025-05-14 14:40:10.462122 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.462127 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.462131 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.462136 | orchestrator | 2025-05-14 14:40:10.462141 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-14 14:40:10.462146 | orchestrator | Wednesday 14 May 2025 14:40:05 +0000 (0:00:00.638) 0:07:09.749 ********* 2025-05-14 14:40:10.462150 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:40:10.462155 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:40:10.462160 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:40:10.462164 | orchestrator | 2025-05-14 14:40:10.462169 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-14 14:40:10.462174 | orchestrator | Wednesday 14 May 2025 14:40:06 +0000 (0:00:00.370) 0:07:10.120 ********* 2025-05-14 14:40:10.462178 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.462183 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.462188 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.462193 | orchestrator | 2025-05-14 14:40:10.462197 | orchestrator | 
RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-14 14:40:10.462202 | orchestrator | Wednesday 14 May 2025 14:40:07 +0000 (0:00:01.289) 0:07:11.410 ********* 2025-05-14 14:40:10.462207 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:40:10.462211 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:40:10.462217 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:40:10.462225 | orchestrator | 2025-05-14 14:40:10.462234 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:40:10.462242 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 14:40:10.462251 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 14:40:10.462270 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 14:40:10.462278 | orchestrator | 2025-05-14 14:40:10.462286 | orchestrator | 2025-05-14 14:40:10.462293 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:40:10.462301 | orchestrator | Wednesday 14 May 2025 14:40:08 +0000 (0:00:01.171) 0:07:12.581 ********* 2025-05-14 14:40:10.462309 | orchestrator | =============================================================================== 2025-05-14 14:40:10.462317 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.03s 2025-05-14 14:40:10.462324 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.30s 2025-05-14 14:40:10.462331 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.60s 2025-05-14 14:40:10.462338 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.73s 2025-05-14 14:40:10.462346 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.66s 2025-05-14 14:40:10.462353 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.28s 2025-05-14 14:40:10.462361 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.66s 2025-05-14 14:40:10.462368 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.55s 2025-05-14 14:40:10.462376 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.51s 2025-05-14 14:40:10.462383 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.47s 2025-05-14 14:40:10.462391 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.14s 2025-05-14 14:40:10.462398 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.12s 2025-05-14 14:40:10.462406 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.07s 2025-05-14 14:40:10.462413 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.74s 2025-05-14 14:40:10.462422 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.59s 2025-05-14 14:40:10.462430 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.48s 2025-05-14 14:40:10.462437 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.47s 2025-05-14 
14:40:10.462445 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.41s
2025-05-14 14:40:10.462454 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.40s
2025-05-14 14:40:10.462462 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.34s
[Polling output condensed: from 2025-05-14 14:40:13 to 14:42:15 the orchestrator logged, roughly every 3 seconds, that tasks d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f, a69dc821-ba55-4ce6-a9d7-ffb571992283, 12f9a1d4-5791-4b71-91b5-a91a79adcb2c and 121917d2-6844-4b08-81d5-da99b976bbe1 were in state STARTED, each round followed by "Wait 1 second(s) until the next check".]
2025-05-14 14:42:18.840840 | orchestrator |
2025-05-14 14:42:18.840950 | orchestrator | 2025-05-14 14:42:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:42:18.840967 | orchestrator | 2025-05-14 14:42:18 | INFO  | Task a69dc821-ba55-4ce6-a9d7-ffb571992283 is in state SUCCESS
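The block above is a client-side wait loop: the deployment work has been handed off as background tasks (identified by their UUIDs), and the console simply polls their state until they reach a terminal state. As a rough editorial sketch of that pattern only, not the actual OSISM client code, the loop could look like the following Python, where get_task_state() is a hypothetical stand-in for whatever backend query returns the task state:

    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                        level=logging.INFO)


    def get_task_state(task_id: str) -> str:
        """Hypothetical stand-in for the real backend query; stubbed so the sketch runs."""
        return "SUCCESS"


    def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
        """Poll every task until it reaches a terminal state (SUCCESS/FAILURE)."""
        pending = set(task_ids)
        while pending:
            # sorted() copies the set, so removing finished tasks while looping is safe
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                logging.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                logging.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)


    if __name__ == "__main__":
        wait_for_tasks(["d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f",
                        "a69dc821-ba55-4ce6-a9d7-ffb571992283"])

In the log, once task a69dc821-ba55-4ce6-a9d7-ffb571992283 reaches SUCCESS, its captured kolla-ansible play output is printed, which is what follows.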
2025-05-14 14:42:18.843210 | orchestrator |
2025-05-14 14:42:18.843250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-14 14:42:18.843289 | orchestrator |
2025-05-14 14:42:18.843301 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-14 14:42:18.843313 | orchestrator | Wednesday 14 May 2025 14:40:12 +0000 (0:00:00.360) 0:00:00.360 *********
2025-05-14 14:42:18.843324 | orchestrator | ok: [testbed-node-0]
2025-05-14 14:42:18.843336 | orchestrator | ok: [testbed-node-1]
2025-05-14 14:42:18.843347 | orchestrator | ok: [testbed-node-2]
2025-05-14 14:42:18.843358 | orchestrator |
2025-05-14 14:42:18.843369 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14
14:42:18.843380 | orchestrator | Wednesday 14 May 2025 14:40:13 +0000 (0:00:00.438) 0:00:00.798 ********* 2025-05-14 14:42:18.843391 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-14 14:42:18.843402 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-14 14:42:18.843413 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-14 14:42:18.843423 | orchestrator | 2025-05-14 14:42:18.843434 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-14 14:42:18.843474 | orchestrator | 2025-05-14 14:42:18.843486 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 14:42:18.843497 | orchestrator | Wednesday 14 May 2025 14:40:13 +0000 (0:00:00.304) 0:00:01.102 ********* 2025-05-14 14:42:18.843508 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:42:18.843519 | orchestrator | 2025-05-14 14:42:18.843530 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-14 14:42:18.843541 | orchestrator | Wednesday 14 May 2025 14:40:14 +0000 (0:00:00.711) 0:00:01.813 ********* 2025-05-14 14:42:18.843552 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 14:42:18.843563 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 14:42:18.843574 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 14:42:18.843584 | orchestrator | 2025-05-14 14:42:18.843595 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-14 14:42:18.843606 | orchestrator | Wednesday 14 May 2025 14:40:14 +0000 (0:00:00.810) 0:00:02.624 ********* 2025-05-14 14:42:18.843713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.843740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.843779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.843800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.843816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.843832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.843852 | orchestrator | 2025-05-14 14:42:18.843865 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 14:42:18.843877 | orchestrator | Wednesday 14 May 2025 14:40:16 +0000 (0:00:01.789) 0:00:04.414 ********* 2025-05-14 14:42:18.843889 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:42:18.843901 | orchestrator | 2025-05-14 14:42:18.843914 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-14 14:42:18.843926 | orchestrator | Wednesday 14 May 2025 14:40:17 +0000 (0:00:00.771) 0:00:05.185 ********* 2025-05-14 14:42:18.843948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.843965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.843977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.843989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844047 | orchestrator | 2025-05-14 14:42:18.844058 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-14 14:42:18.844070 | orchestrator | Wednesday 14 May 2025 14:40:21 +0000 (0:00:03.619) 0:00:08.804 ********* 2025-05-14 14:42:18.844081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:42:18.844093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:42:18.844112 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:42:18.844130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:42:18.844147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:42:18.844159 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:42:18.844171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:42:18.844183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:42:18.844201 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:42:18.844213 | orchestrator | 2025-05-14 14:42:18.844224 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-14 14:42:18.844235 | orchestrator | Wednesday 14 May 2025 
14:40:21 +0000 (0:00:00.827) 0:00:09.632 ********* 2025-05-14 14:42:18.844252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:42:18.844270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:42:18.844282 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:42:18.844293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:42:18.844305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:42:18.844324 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:42:18.844340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 14:42:18.844358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 14:42:18.844370 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:42:18.844381 | orchestrator | 2025-05-14 14:42:18.844392 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-14 14:42:18.844403 | orchestrator | Wednesday 14 May 2025 14:40:23 +0000 (0:00:01.218) 0:00:10.850 ********* 2025-05-14 14:42:18.844414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.844432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.844471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.844498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844554 | orchestrator | 2025-05-14 14:42:18.844566 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-14 14:42:18.844577 | orchestrator | Wednesday 14 May 2025 14:40:25 +0000 (0:00:02.382) 0:00:13.232 ********* 2025-05-14 14:42:18.844588 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:42:18.844599 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:42:18.844610 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:42:18.844621 | orchestrator | 2025-05-14 14:42:18.844632 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-14 14:42:18.844643 | orchestrator | Wednesday 14 May 2025 14:40:28 +0000 (0:00:03.079) 0:00:16.312 ********* 2025-05-14 14:42:18.844654 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:42:18.844664 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:42:18.844675 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:42:18.844686 | orchestrator | 2025-05-14 14:42:18.844697 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-14 14:42:18.844708 | orchestrator | Wednesday 14 May 2025 14:40:30 +0000 (0:00:01.625) 0:00:17.937 ********* 2025-05-14 14:42:18.844729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.844746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.844758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 14:42:18.844777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 14:42:18.844836 | orchestrator | 2025-05-14 14:42:18.844847 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 14:42:18.844858 | orchestrator | Wednesday 14 May 2025 14:40:32 +0000 (0:00:02.760) 0:00:20.698 ********* 2025-05-14 14:42:18.844869 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:42:18.844879 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:42:18.844890 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:42:18.844900 | orchestrator | 2025-05-14 14:42:18.844911 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 14:42:18.844922 | orchestrator | Wednesday 14 May 2025 14:40:33 +0000 (0:00:00.298) 0:00:20.997 ********* 2025-05-14 14:42:18.844932 | orchestrator | 2025-05-14 14:42:18.844943 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 14:42:18.844953 | orchestrator | Wednesday 14 May 2025 14:40:33 +0000 (0:00:00.203) 0:00:21.200 ********* 2025-05-14 14:42:18.844964 | orchestrator | 2025-05-14 14:42:18.844975 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 14:42:18.844985 | orchestrator | Wednesday 14 May 2025 14:40:33 +0000 (0:00:00.056) 0:00:21.257 ********* 2025-05-14 14:42:18.844996 | orchestrator | 2025-05-14 14:42:18.845064 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-14 14:42:18.845076 | 
orchestrator | Wednesday 14 May 2025 14:40:33 +0000 (0:00:00.059) 0:00:21.316 ********* 2025-05-14 14:42:18.845087 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:42:18.845098 | orchestrator | 2025-05-14 14:42:18.845109 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-14 14:42:18.845120 | orchestrator | Wednesday 14 May 2025 14:40:33 +0000 (0:00:00.186) 0:00:21.502 ********* 2025-05-14 14:42:18.845131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:42:18.845142 | orchestrator | 2025-05-14 14:42:18.845152 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-14 14:42:18.845163 | orchestrator | Wednesday 14 May 2025 14:40:34 +0000 (0:00:00.619) 0:00:22.122 ********* 2025-05-14 14:42:18.845174 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:42:18.845185 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:42:18.845196 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:42:18.845207 | orchestrator | 2025-05-14 14:42:18.845218 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-14 14:42:18.845229 | orchestrator | Wednesday 14 May 2025 14:41:06 +0000 (0:00:31.751) 0:00:53.873 ********* 2025-05-14 14:42:18.845240 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:42:18.845251 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:42:18.845262 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:42:18.845272 | orchestrator | 2025-05-14 14:42:18.845283 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 14:42:18.845294 | orchestrator | Wednesday 14 May 2025 14:42:03 +0000 (0:00:57.825) 0:01:51.699 ********* 2025-05-14 14:42:18.845305 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:42:18.845316 | orchestrator | 2025-05-14 14:42:18.845327 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-14 14:42:18.845337 | orchestrator | Wednesday 14 May 2025 14:42:04 +0000 (0:00:00.773) 0:01:52.473 ********* 2025-05-14 14:42:18.845348 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:42:18.845359 | orchestrator | 2025-05-14 14:42:18.845370 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-14 14:42:18.845381 | orchestrator | Wednesday 14 May 2025 14:42:07 +0000 (0:00:02.882) 0:01:55.356 ********* 2025-05-14 14:42:18.845391 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:42:18.845402 | orchestrator | 2025-05-14 14:42:18.845413 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-14 14:42:18.845424 | orchestrator | Wednesday 14 May 2025 14:42:10 +0000 (0:00:02.776) 0:01:58.132 ********* 2025-05-14 14:42:18.845435 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:42:18.845494 | orchestrator | 2025-05-14 14:42:18.845507 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-14 14:42:18.845518 | orchestrator | Wednesday 14 May 2025 14:42:13 +0000 (0:00:03.004) 0:02:01.137 ********* 2025-05-14 14:42:18.845529 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:42:18.845540 | orchestrator | 2025-05-14 14:42:18.845558 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 
14:42:18.845571 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:42:18.845584 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:42:18.845595 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 14:42:18.845605 | orchestrator | 2025-05-14 14:42:18.845616 | orchestrator | 2025-05-14 14:42:18.845627 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:42:18.845638 | orchestrator | Wednesday 14 May 2025 14:42:16 +0000 (0:00:02.949) 0:02:04.086 ********* 2025-05-14 14:42:18.845649 | orchestrator | =============================================================================== 2025-05-14 14:42:18.845660 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 57.83s 2025-05-14 14:42:18.845677 | orchestrator | opensearch : Restart opensearch container ------------------------------ 31.75s 2025-05-14 14:42:18.845688 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.62s 2025-05-14 14:42:18.845699 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.08s 2025-05-14 14:42:18.845709 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.00s 2025-05-14 14:42:18.845720 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.95s 2025-05-14 14:42:18.845731 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.88s 2025-05-14 14:42:18.845742 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.78s 2025-05-14 14:42:18.845752 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.76s 2025-05-14 14:42:18.845763 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s 2025-05-14 14:42:18.845774 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.79s 2025-05-14 14:42:18.845785 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.63s 2025-05-14 14:42:18.845796 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.22s 2025-05-14 14:42:18.845806 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.83s 2025-05-14 14:42:18.845817 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.81s 2025-05-14 14:42:18.845828 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.77s 2025-05-14 14:42:18.845839 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.77s 2025-05-14 14:42:18.845850 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-05-14 14:42:18.845861 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.62s 2025-05-14 14:42:18.845872 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-05-14 14:42:18.845976 | orchestrator | 2025-05-14 14:42:18 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:18.846160 | orchestrator | 2025-05-14 14:42:18 | INFO  | Task 
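
The post-config tasks in the play above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") talk to the OpenSearch ISM plugin over its REST API. As a rough illustration only — the endpoint address, policy name, index pattern, and retention period below are placeholders, not values taken from this run — the equivalent calls could look like this:

```python
# Sketch of the retention-policy steps using the OpenSearch ISM REST API.
# Endpoint, credentials handling, policy name, index pattern, and the 14d
# retention are all placeholders, not values read from this deployment.
import requests

OPENSEARCH = "http://192.168.16.254:9200"  # placeholder internal endpoint
POLICY_ID = "retention"                    # placeholder policy name

policy = {
    "policy": {
        "description": "delete indices after a fixed retention period",
        "default_state": "hot",
        "states": [
            {"name": "hot",
             "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
    }
}

# "Check if a log retention policy exists": a 404 means it has to be created.
r = requests.get(f"{OPENSEARCH}/_plugins/_ism/policies/{POLICY_ID}")
if r.status_code == 404:
    # "Create new log retention policy"
    requests.put(f"{OPENSEARCH}/_plugins/_ism/policies/{POLICY_ID}", json=policy)

# "Apply retention policy to existing indices" (index pattern is a placeholder).
requests.post(f"{OPENSEARCH}/_plugins/_ism/add/log-*",
              json={"policy_id": POLICY_ID})
```
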
121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:18.846182 | orchestrator | 2025-05-14 14:42:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:21.902598 | orchestrator | 2025-05-14 14:42:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:21.905055 | orchestrator | 2025-05-14 14:42:21 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:21.906869 | orchestrator | 2025-05-14 14:42:21 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:21.906901 | orchestrator | 2025-05-14 14:42:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:24.966383 | orchestrator | 2025-05-14 14:42:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:24.966656 | orchestrator | 2025-05-14 14:42:24 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:24.970259 | orchestrator | 2025-05-14 14:42:24 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:24.970365 | orchestrator | 2025-05-14 14:42:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:28.038228 | orchestrator | 2025-05-14 14:42:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:28.039902 | orchestrator | 2025-05-14 14:42:28 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:28.042143 | orchestrator | 2025-05-14 14:42:28 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:28.042165 | orchestrator | 2025-05-14 14:42:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:31.096673 | orchestrator | 2025-05-14 14:42:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:31.097686 | orchestrator | 2025-05-14 14:42:31 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:31.099592 | orchestrator | 2025-05-14 14:42:31 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:31.099938 | orchestrator | 2025-05-14 14:42:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:34.167049 | orchestrator | 2025-05-14 14:42:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:34.170167 | orchestrator | 2025-05-14 14:42:34 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:34.173810 | orchestrator | 2025-05-14 14:42:34 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:34.173855 | orchestrator | 2025-05-14 14:42:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:37.233315 | orchestrator | 2025-05-14 14:42:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:37.233426 | orchestrator | 2025-05-14 14:42:37 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:37.234941 | orchestrator | 2025-05-14 14:42:37 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:37.234973 | orchestrator | 2025-05-14 14:42:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:40.294958 | orchestrator | 2025-05-14 14:42:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:40.296820 | orchestrator | 2025-05-14 14:42:40 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state 
STARTED 2025-05-14 14:42:40.297963 | orchestrator | 2025-05-14 14:42:40 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:40.298144 | orchestrator | 2025-05-14 14:42:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:43.364546 | orchestrator | 2025-05-14 14:42:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:43.371295 | orchestrator | 2025-05-14 14:42:43 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:43.371378 | orchestrator | 2025-05-14 14:42:43 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:43.371842 | orchestrator | 2025-05-14 14:42:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:46.432003 | orchestrator | 2025-05-14 14:42:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:46.434180 | orchestrator | 2025-05-14 14:42:46 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:46.434967 | orchestrator | 2025-05-14 14:42:46 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:46.435148 | orchestrator | 2025-05-14 14:42:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:49.482953 | orchestrator | 2025-05-14 14:42:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:49.484729 | orchestrator | 2025-05-14 14:42:49 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:49.487223 | orchestrator | 2025-05-14 14:42:49 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:49.487269 | orchestrator | 2025-05-14 14:42:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:52.540803 | orchestrator | 2025-05-14 14:42:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:52.541600 | orchestrator | 2025-05-14 14:42:52 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:52.542826 | orchestrator | 2025-05-14 14:42:52 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:52.543148 | orchestrator | 2025-05-14 14:42:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:55.597918 | orchestrator | 2025-05-14 14:42:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:55.598334 | orchestrator | 2025-05-14 14:42:55 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:55.599321 | orchestrator | 2025-05-14 14:42:55 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:55.599355 | orchestrator | 2025-05-14 14:42:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:42:58.650257 | orchestrator | 2025-05-14 14:42:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:42:58.650381 | orchestrator | 2025-05-14 14:42:58 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:42:58.652346 | orchestrator | 2025-05-14 14:42:58 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:42:58.652460 | orchestrator | 2025-05-14 14:42:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:01.702670 | orchestrator | 2025-05-14 14:43:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:01.704379 | orchestrator 
| 2025-05-14 14:43:01 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:01.706928 | orchestrator | 2025-05-14 14:43:01 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:01.706971 | orchestrator | 2025-05-14 14:43:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:04.775672 | orchestrator | 2025-05-14 14:43:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:04.775890 | orchestrator | 2025-05-14 14:43:04 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:04.777127 | orchestrator | 2025-05-14 14:43:04 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:04.777176 | orchestrator | 2025-05-14 14:43:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:07.842946 | orchestrator | 2025-05-14 14:43:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:07.843345 | orchestrator | 2025-05-14 14:43:07 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:07.845380 | orchestrator | 2025-05-14 14:43:07 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:07.845419 | orchestrator | 2025-05-14 14:43:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:10.906158 | orchestrator | 2025-05-14 14:43:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:10.908807 | orchestrator | 2025-05-14 14:43:10 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:10.908991 | orchestrator | 2025-05-14 14:43:10 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:10.909478 | orchestrator | 2025-05-14 14:43:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:13.971854 | orchestrator | 2025-05-14 14:43:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:13.975026 | orchestrator | 2025-05-14 14:43:13 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:13.976720 | orchestrator | 2025-05-14 14:43:13 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:13.977038 | orchestrator | 2025-05-14 14:43:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:17.028093 | orchestrator | 2025-05-14 14:43:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:17.033756 | orchestrator | 2025-05-14 14:43:17 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:17.034926 | orchestrator | 2025-05-14 14:43:17 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:17.035079 | orchestrator | 2025-05-14 14:43:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:20.104416 | orchestrator | 2025-05-14 14:43:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:20.107193 | orchestrator | 2025-05-14 14:43:20 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:20.109377 | orchestrator | 2025-05-14 14:43:20 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:20.109795 | orchestrator | 2025-05-14 14:43:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:23.162721 | orchestrator | 2025-05-14 14:43:23 | INFO  | Task 
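
The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines is the deploy wrapper polling the background tasks until they leave the STARTED state (task 121917d2-… reaches SUCCESS a little further down). A minimal, hypothetical sketch of that polling pattern — get_task_state() is a stand-in, not the actual OSISM client API:

```python
# Hypothetical sketch of the polling behaviour visible in the log: query each
# task's state, log it, and sleep one second between rounds until every task
# has finished. get_task_state() is a placeholder for the real client call.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)

def wait_for_tasks(task_ids, get_task_state, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):          # sorted() copies, safe to discard
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
```
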
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:23.163898 | orchestrator | 2025-05-14 14:43:23 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:23.165064 | orchestrator | 2025-05-14 14:43:23 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:23.165098 | orchestrator | 2025-05-14 14:43:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:26.216132 | orchestrator | 2025-05-14 14:43:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:26.217597 | orchestrator | 2025-05-14 14:43:26 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:26.220120 | orchestrator | 2025-05-14 14:43:26 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state STARTED 2025-05-14 14:43:26.220642 | orchestrator | 2025-05-14 14:43:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:29.281172 | orchestrator | 2025-05-14 14:43:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:29.281460 | orchestrator | 2025-05-14 14:43:29 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:29.288006 | orchestrator | 2025-05-14 14:43:29 | INFO  | Task 121917d2-6844-4b08-81d5-da99b976bbe1 is in state SUCCESS 2025-05-14 14:43:29.289911 | orchestrator | 2025-05-14 14:43:29.289952 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 14:43:29.289965 | orchestrator | 2025-05-14 14:43:29.290083 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-14 14:43:29.290101 | orchestrator | 2025-05-14 14:43:29.290113 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 14:43:29.290125 | orchestrator | Wednesday 14 May 2025 14:30:35 +0000 (0:00:01.543) 0:00:01.544 ********* 2025-05-14 14:43:29.290142 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.290156 | orchestrator | 2025-05-14 14:43:29.290167 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 14:43:29.290178 | orchestrator | Wednesday 14 May 2025 14:30:36 +0000 (0:00:01.249) 0:00:02.793 ********* 2025-05-14 14:43:29.290189 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:29.290201 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 14:43:29.290211 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 14:43:29.290287 | orchestrator | 2025-05-14 14:43:29.290300 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 14:43:29.290311 | orchestrator | Wednesday 14 May 2025 14:30:36 +0000 (0:00:00.672) 0:00:03.465 ********* 2025-05-14 14:43:29.290324 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.290336 | orchestrator | 2025-05-14 14:43:29.290347 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 14:43:29.290358 | orchestrator | Wednesday 14 May 2025 14:30:38 +0000 (0:00:01.237) 0:00:04.703 ********* 
2025-05-14 14:43:29.290369 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.290379 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.290390 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.290404 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.290421 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.290440 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.290459 | orchestrator | 2025-05-14 14:43:29.290479 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 14:43:29.290500 | orchestrator | Wednesday 14 May 2025 14:30:39 +0000 (0:00:01.292) 0:00:05.995 ********* 2025-05-14 14:43:29.290521 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.290540 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.290559 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.290579 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.290599 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.290697 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.290716 | orchestrator | 2025-05-14 14:43:29.290736 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 14:43:29.290749 | orchestrator | Wednesday 14 May 2025 14:30:40 +0000 (0:00:00.930) 0:00:06.926 ********* 2025-05-14 14:43:29.290761 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.290830 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.290845 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.290857 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.290894 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.290906 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.290916 | orchestrator | 2025-05-14 14:43:29.290927 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 14:43:29.290938 | orchestrator | Wednesday 14 May 2025 14:30:41 +0000 (0:00:01.149) 0:00:08.075 ********* 2025-05-14 14:43:29.290949 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.290960 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.290970 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.290981 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.290991 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.291002 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.291013 | orchestrator | 2025-05-14 14:43:29.291023 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 14:43:29.291034 | orchestrator | Wednesday 14 May 2025 14:30:42 +0000 (0:00:01.306) 0:00:09.382 ********* 2025-05-14 14:43:29.291045 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.291055 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.291065 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.291076 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.291087 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.291097 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.291108 | orchestrator | 2025-05-14 14:43:29.291119 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 14:43:29.291130 | orchestrator | Wednesday 14 May 2025 14:30:43 +0000 (0:00:00.811) 0:00:10.194 ********* 2025-05-14 14:43:29.291141 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.291151 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.291162 | orchestrator | 
ok: [testbed-node-2] 2025-05-14 14:43:29.291172 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.291182 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.291193 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.291204 | orchestrator | 2025-05-14 14:43:29.291214 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 14:43:29.291226 | orchestrator | Wednesday 14 May 2025 14:30:44 +0000 (0:00:01.151) 0:00:11.345 ********* 2025-05-14 14:43:29.291237 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.291377 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.291389 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.291400 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.291410 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.291421 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.291432 | orchestrator | 2025-05-14 14:43:29.291443 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 14:43:29.291453 | orchestrator | Wednesday 14 May 2025 14:30:46 +0000 (0:00:01.196) 0:00:12.541 ********* 2025-05-14 14:43:29.291464 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.291474 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.291485 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.291496 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.291515 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.291534 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.291553 | orchestrator | 2025-05-14 14:43:29.291592 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 14:43:29.291651 | orchestrator | Wednesday 14 May 2025 14:30:46 +0000 (0:00:00.885) 0:00:13.427 ********* 2025-05-14 14:43:29.291672 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:29.291691 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:43:29.291711 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:43:29.291729 | orchestrator | 2025-05-14 14:43:29.291743 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 14:43:29.291754 | orchestrator | Wednesday 14 May 2025 14:30:47 +0000 (0:00:00.779) 0:00:14.207 ********* 2025-05-14 14:43:29.291776 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.291787 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.291797 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.291808 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.291819 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.291829 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.291840 | orchestrator | 2025-05-14 14:43:29.291851 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 14:43:29.291862 | orchestrator | Wednesday 14 May 2025 14:30:49 +0000 (0:00:01.832) 0:00:16.039 ********* 2025-05-14 14:43:29.291873 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:29.291883 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:43:29.291894 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 
14:43:29.291904 | orchestrator | 2025-05-14 14:43:29.291915 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 14:43:29.291926 | orchestrator | Wednesday 14 May 2025 14:30:52 +0000 (0:00:02.863) 0:00:18.903 ********* 2025-05-14 14:43:29.291936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.291947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.291958 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.291968 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.291979 | orchestrator | 2025-05-14 14:43:29.292026 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 14:43:29.292038 | orchestrator | Wednesday 14 May 2025 14:30:52 +0000 (0:00:00.453) 0:00:19.356 ********* 2025-05-14 14:43:29.292052 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292066 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292088 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.292099 | orchestrator | 2025-05-14 14:43:29.292110 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 14:43:29.292120 | orchestrator | Wednesday 14 May 2025 14:30:53 +0000 (0:00:00.853) 0:00:20.210 ********* 2025-05-14 14:43:29.292133 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292148 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292159 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292177 | orchestrator | skipping: [testbed-node-0] 
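
The "find a running mon container" task above runs `docker ps -q --filter name=ceph-mon-<hostname>` against each monitor node (the exact commands are echoed in the skipped items that follow), and the subsequent set_fact tasks only record a running_mon when that command returns a container ID. A small standalone sketch of the same check — the helper name is mine, not the role's:

```python
# Sketch of the check behind "find a running mon container": ask Docker for
# container IDs matching ceph-mon-<hostname>; an empty result means no
# monitor container is running on that host yet.
import subprocess

def find_running_mon(hostname: str) -> str | None:
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=True,
    )
    container_id = result.stdout.strip()
    return container_id or None

# Example with the monitor hostnames that appear in this log:
for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    print(node, find_running_mon(node))
```
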
2025-05-14 14:43:29.292188 | orchestrator | 2025-05-14 14:43:29.292199 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 14:43:29.292331 | orchestrator | Wednesday 14 May 2025 14:30:53 +0000 (0:00:00.214) 0:00:20.424 ********* 2025-05-14 14:43:29.292362 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 14:30:50.268176', 'end': '2025-05-14 14:30:50.558018', 'delta': '0:00:00.289842', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292388 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 14:30:51.074783', 'end': '2025-05-14 14:30:51.320266', 'delta': '0:00:00.245483', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292408 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 14:30:51.913607', 'end': '2025-05-14 14:30:52.172959', 'delta': '0:00:00.259352', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 14:43:29.292430 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.292450 | orchestrator | 2025-05-14 14:43:29.292470 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 14:43:29.292490 | orchestrator | Wednesday 14 May 2025 14:30:54 +0000 (0:00:00.161) 0:00:20.586 ********* 2025-05-14 14:43:29.292508 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.292526 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.292545 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.292564 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.292581 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.292598 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.292635 | orchestrator | 2025-05-14 14:43:29.292650 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 14:43:29.292668 | orchestrator | Wednesday 14 May 2025 14:30:55 +0000 (0:00:01.102) 0:00:21.689 ********* 2025-05-14 
14:43:29.292687 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.292706 | orchestrator | 2025-05-14 14:43:29.292724 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 14:43:29.292741 | orchestrator | Wednesday 14 May 2025 14:30:55 +0000 (0:00:00.619) 0:00:22.308 ********* 2025-05-14 14:43:29.292759 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.292778 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.292812 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.292833 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.292851 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.292872 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.292891 | orchestrator | 2025-05-14 14:43:29.292910 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 14:43:29.292928 | orchestrator | Wednesday 14 May 2025 14:30:56 +0000 (0:00:00.583) 0:00:22.891 ********* 2025-05-14 14:43:29.292945 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.292956 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.292967 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.292977 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.292988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.292998 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293009 | orchestrator | 2025-05-14 14:43:29.293020 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 14:43:29.293031 | orchestrator | Wednesday 14 May 2025 14:30:57 +0000 (0:00:00.848) 0:00:23.739 ********* 2025-05-14 14:43:29.293041 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293052 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.293062 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.293073 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.293083 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.293094 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293104 | orchestrator | 2025-05-14 14:43:29.293115 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 14:43:29.293126 | orchestrator | Wednesday 14 May 2025 14:30:57 +0000 (0:00:00.767) 0:00:24.507 ********* 2025-05-14 14:43:29.293202 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293216 | orchestrator | 2025-05-14 14:43:29.293227 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 14:43:29.293245 | orchestrator | Wednesday 14 May 2025 14:30:58 +0000 (0:00:00.152) 0:00:24.659 ********* 2025-05-14 14:43:29.293257 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293268 | orchestrator | 2025-05-14 14:43:29.293279 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 14:43:29.293289 | orchestrator | Wednesday 14 May 2025 14:30:58 +0000 (0:00:00.740) 0:00:25.400 ********* 2025-05-14 14:43:29.293300 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293311 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.293322 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.293332 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.293343 | orchestrator | skipping: [testbed-node-4] 
2025-05-14 14:43:29.293353 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293364 | orchestrator | 2025-05-14 14:43:29.293375 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 14:43:29.293385 | orchestrator | Wednesday 14 May 2025 14:30:59 +0000 (0:00:00.569) 0:00:25.969 ********* 2025-05-14 14:43:29.293396 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293407 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.293417 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.293428 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.293439 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.293449 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293554 | orchestrator | 2025-05-14 14:43:29.293566 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 14:43:29.293576 | orchestrator | Wednesday 14 May 2025 14:31:00 +0000 (0:00:00.961) 0:00:26.931 ********* 2025-05-14 14:43:29.293588 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293598 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.293634 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.293646 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.293664 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293697 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.293717 | orchestrator | 2025-05-14 14:43:29.293736 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 14:43:29.293756 | orchestrator | Wednesday 14 May 2025 14:31:01 +0000 (0:00:00.757) 0:00:27.688 ********* 2025-05-14 14:43:29.293777 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293797 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.293816 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.293835 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.293852 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.293867 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293878 | orchestrator | 2025-05-14 14:43:29.293889 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 14:43:29.293900 | orchestrator | Wednesday 14 May 2025 14:31:02 +0000 (0:00:01.330) 0:00:29.018 ********* 2025-05-14 14:43:29.293911 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.293922 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.293932 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.293943 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.293953 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.293964 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.293975 | orchestrator | 2025-05-14 14:43:29.293986 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 14:43:29.293996 | orchestrator | Wednesday 14 May 2025 14:31:03 +0000 (0:00:01.237) 0:00:30.256 ********* 2025-05-14 14:43:29.294007 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.294067 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.294082 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.294093 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.294104 | orchestrator | skipping: 
[testbed-node-4] 2025-05-14 14:43:29.294115 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.294125 | orchestrator | 2025-05-14 14:43:29.294136 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 14:43:29.294147 | orchestrator | Wednesday 14 May 2025 14:31:04 +0000 (0:00:00.993) 0:00:31.250 ********* 2025-05-14 14:43:29.294158 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.294169 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.294214 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.294226 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.294237 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.294247 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.294276 | orchestrator | 2025-05-14 14:43:29.294287 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 14:43:29.294298 | orchestrator | Wednesday 14 May 2025 14:31:05 +0000 (0:00:00.738) 0:00:31.988 ********* 2025-05-14 14:43:29.294310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.294747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.294774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-05-14 14:43:29.294932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.294957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0', 'scsi-SQEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part1', 'scsi-SQEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part14', 'scsi-SQEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part15', 'scsi-SQEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part16', 'scsi-SQEMU_QEMU_HARDDISK_19369105-bfec-4360-a374-d2f34f1753a0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.294978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.294998 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.295018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295134 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.295146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f', 'scsi-SQEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b49ba43d-495c-4e97-94d5-24ddaafe687f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.295300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.295313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd-osd--block--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd', 'dm-uuid-LVM-oe3XGIkJvHmuTCqQQRTGeCAkYgXQgXzd2RsjSfffTM4F5lVpw6hFc3ttiSpRdAV2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295395 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.295483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--46afb65a--1642--5955--80d8--115babed40cc-osd--block--46afb65a--1642--5955--80d8--115babed40cc', 'dm-uuid-LVM-cRQSoffzeBotqALG8g4q1BtUZeu0J29ltwdTcO4fGybmK06xTtxeLfLzLfxo9j4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6-osd--block--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6', 'dm-uuid-LVM-vcT5VQ7OUb1W830jE1EoSTWdQqfqUOnH4im0PxSi38kU4Xy2HwDjl335trLf3UOF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6248da54--4321--5f95--9f37--ef0f81563cc8-osd--block--6248da54--4321--5f95--9f37--ef0f81563cc8', 'dm-uuid-LVM-XOVX0EsNquAUng9MsJj6p0l2DeYaxh8TzqDWDfU2GynMgn0Af0ekd82IHodU8i5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part1', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part14', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part15', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part16', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.295934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.295948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd-osd--block--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZaXtNO-vR2R-lKjU-8sZ1-xvgA-ZmuC-tIHpBb', 'scsi-0QEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4', 'scsi-SQEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.295970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--46afb65a--1642--5955--80d8--115babed40cc-osd--block--46afb65a--1642--5955--80d8--115babed40cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvOutx-C5Ro-07d3-5Kq6-8Pik-TgN0-grLCqe', 'scsi-0QEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e', 'scsi-SQEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4', 'scsi-SQEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296099 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.296110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6-osd--block--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oxqQbH-cdgO-Wj5n-JI4t-RKdE-A9iz-EbXT3c', 'scsi-0QEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1', 'scsi-SQEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6248da54--4321--5f95--9f37--ef0f81563cc8-osd--block--6248da54--4321--5f95--9f37--ef0f81563cc8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yeQNr2-HpDv-wSbK-oErd-fkvR-pHdP-UTSgbM', 'scsi-0QEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728', 'scsi-SQEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194', 'scsi-SQEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296400 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.296431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dde3cc5c--c032--592e--96b0--b740b8614a8d-osd--block--dde3cc5c--c032--592e--96b0--b740b8614a8d', 'dm-uuid-LVM-gDtDio710LxXMnniF8MCCubUAbaCS8lJf0GbSZpnRCALMbe8pjMxZ0b9LbRij3gi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5402478b--0937--58a5--a80f--00ed6e381d0d-osd--block--5402478b--0937--58a5--a80f--00ed6e381d0d', 'dm-uuid-LVM-3Mlz0P66Tjdwlu1DlY9xpPQCdvzGvTq1ozttby7WifLg5gXjuo4MSBXPdCe2HT07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:43:29.296691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dde3cc5c--c032--592e--96b0--b740b8614a8d-osd--block--dde3cc5c--c032--592e--96b0--b740b8614a8d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qF2iu1-tNFH-bV6n-x1bQ-1fPY-U0wy-D2l4PA', 'scsi-0QEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640', 'scsi-SQEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5402478b--0937--58a5--a80f--00ed6e381d0d-osd--block--5402478b--0937--58a5--a80f--00ed6e381d0d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cPZ6GG-hKOT-fTQ0-JH3B-bmXl-XIUJ-zXQGfW', 'scsi-0QEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb', 'scsi-SQEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb', 'scsi-SQEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:43:29.296779 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.296789 | orchestrator | 2025-05-14 14:43:29.296799 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 14:43:29.296809 | orchestrator | Wednesday 14 May 2025 14:31:07 +0000 (0:00:01.764) 0:00:33.752 ********* 2025-05-14 14:43:29.296819 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.296829 | orchestrator | 2025-05-14 14:43:29.296877 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 14:43:29.296888 | orchestrator | Wednesday 14 May 2025 14:31:07 +0000 (0:00:00.319) 0:00:34.072 ********* 2025-05-14 14:43:29.296898 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.296908 | orchestrator | 2025-05-14 14:43:29.296949 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 14:43:29.296960 | orchestrator | Wednesday 14 May 2025 14:31:07 +0000 (0:00:00.155) 0:00:34.228 ********* 2025-05-14 14:43:29.296969 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.296979 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.297113 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.297135 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.297168 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.297188 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.297207 | orchestrator | 2025-05-14 14:43:29.297226 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 14:43:29.297246 | orchestrator | Wednesday 14 May 2025 14:31:08 +0000 
(0:00:00.901) 0:00:35.130 ********* 2025-05-14 14:43:29.297265 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.297283 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.297300 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.297310 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.297320 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.297329 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.297339 | orchestrator | 2025-05-14 14:43:29.297348 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 14:43:29.297363 | orchestrator | Wednesday 14 May 2025 14:31:10 +0000 (0:00:01.567) 0:00:36.697 ********* 2025-05-14 14:43:29.297380 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.297397 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.297414 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.297430 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.297448 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.297464 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.297482 | orchestrator | 2025-05-14 14:43:29.297498 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 14:43:29.297516 | orchestrator | Wednesday 14 May 2025 14:31:10 +0000 (0:00:00.789) 0:00:37.487 ********* 2025-05-14 14:43:29.297533 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.297550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.297560 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.297569 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.297579 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.297589 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.297598 | orchestrator | 2025-05-14 14:43:29.297670 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 14:43:29.297683 | orchestrator | Wednesday 14 May 2025 14:31:12 +0000 (0:00:01.171) 0:00:38.658 ********* 2025-05-14 14:43:29.297693 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.297702 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.297712 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.297722 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.297731 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.297741 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.297750 | orchestrator | 2025-05-14 14:43:29.297760 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 14:43:29.297769 | orchestrator | Wednesday 14 May 2025 14:31:12 +0000 (0:00:00.853) 0:00:39.511 ********* 2025-05-14 14:43:29.297778 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.297788 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.297798 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.297807 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.297816 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.297826 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.297835 | orchestrator | 2025-05-14 14:43:29.297845 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 14:43:29.297854 | orchestrator | Wednesday 14 May 2025 14:31:14 +0000 (0:00:01.408) 0:00:40.920 ********* 2025-05-14 14:43:29.297939 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.297948 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.297956 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.297963 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.297971 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.297979 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.297986 | orchestrator | 2025-05-14 14:43:29.297994 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 14:43:29.298011 | orchestrator | Wednesday 14 May 2025 14:31:15 +0000 (0:00:01.278) 0:00:42.198 ********* 2025-05-14 14:43:29.298046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.298063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.298072 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 14:43:29.298086 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 14:43:29.298094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 14:43:29.298102 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 14:43:29.298109 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.298117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.298126 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 14:43:29.298141 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.298155 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 14:43:29.298170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:43:29.298186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:43:29.298201 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.298216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:43:29.298232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:43:29.298247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:43:29.298260 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.298274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:43:29.298289 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:43:29.298303 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.298312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:43:29.298319 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:43:29.298327 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.298334 | orchestrator | 2025-05-14 14:43:29.298342 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 14:43:29.298354 | orchestrator | Wednesday 14 May 2025 14:31:18 +0000 (0:00:02.882) 0:00:45.081 ********* 2025-05-14 14:43:29.298367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.298381 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 14:43:29.298395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.298408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 
14:43:29.298421 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 14:43:29.298436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.298449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 14:43:29.298465 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 14:43:29.298479 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.298493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:43:29.298502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 14:43:29.298509 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.298517 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.298525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:43:29.298533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:43:29.298541 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:43:29.298548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:43:29.298556 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.298564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:43:29.298580 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:43:29.298587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:43:29.298595 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.298603 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:43:29.298635 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.298643 | orchestrator | 2025-05-14 14:43:29.298651 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 14:43:29.298659 | orchestrator | Wednesday 14 May 2025 14:31:20 +0000 (0:00:02.239) 0:00:47.321 ********* 2025-05-14 14:43:29.298667 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:29.298709 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-14 14:43:29.298718 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-14 14:43:29.298737 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-14 14:43:29.298745 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 14:43:29.298753 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 14:43:29.298761 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-14 14:43:29.298769 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-14 14:43:29.298777 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-14 14:43:29.298784 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-14 14:43:29.298792 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 14:43:29.298800 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 14:43:29.298807 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-14 14:43:29.298817 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-14 14:43:29.298831 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-14 14:43:29.298845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 14:43:29.298858 | orchestrator | ok: 
[testbed-node-5] => (item=testbed-node-1) 2025-05-14 14:43:29.298871 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-14 14:43:29.298884 | orchestrator | 2025-05-14 14:43:29.298893 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 14:43:29.298909 | orchestrator | Wednesday 14 May 2025 14:31:25 +0000 (0:00:04.709) 0:00:52.030 ********* 2025-05-14 14:43:29.298923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.298931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.298939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.298946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 14:43:29.298954 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 14:43:29.298962 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.298969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 14:43:29.298977 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 14:43:29.298985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 14:43:29.298992 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 14:43:29.299000 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.299008 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.299015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:43:29.299023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:43:29.299030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:43:29.299038 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:43:29.299045 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:43:29.299053 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.299061 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:43:29.299076 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.299083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:43:29.299187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:43:29.299195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:43:29.299203 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.299214 | orchestrator | 2025-05-14 14:43:29.299228 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 14:43:29.299242 | orchestrator | Wednesday 14 May 2025 14:31:26 +0000 (0:00:00.980) 0:00:53.010 ********* 2025-05-14 14:43:29.299256 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.299271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.299285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.299299 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 14:43:29.299314 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 14:43:29.299328 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.299342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 
14:43:29.299355 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 14:43:29.299363 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 14:43:29.299371 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 14:43:29.299379 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.299386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:43:29.299394 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.299402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:43:29.299410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:43:29.299418 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:43:29.299426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:43:29.299439 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.299452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:43:29.299466 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.299479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:43:29.299493 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:43:29.299508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:43:29.299522 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.299536 | orchestrator | 2025-05-14 14:43:29.299551 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 14:43:29.299564 | orchestrator | Wednesday 14 May 2025 14:31:27 +0000 (0:00:01.223) 0:00:54.234 ********* 2025-05-14 14:43:29.299577 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-14 14:43:29.299585 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:43:29.299593 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:43:29.299601 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:43:29.299635 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-14 14:43:29.299644 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:43:29.299651 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:43:29.299659 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:43:29.299667 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-14 14:43:29.299691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:43:29.299706 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:43:29.299714 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:43:29.299722 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.299729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:43:29.299737 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:43:29.299745 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:43:29.299753 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.299760 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:43:29.299768 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:43:29.299775 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:43:29.299783 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.299791 | orchestrator | 2025-05-14 14:43:29.299799 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 14:43:29.299807 | orchestrator | Wednesday 14 May 2025 14:31:28 +0000 (0:00:01.216) 0:00:55.451 ********* 2025-05-14 14:43:29.299843 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.299870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.299879 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.299887 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.299895 | orchestrator | 2025-05-14 14:43:29.299903 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.299911 | orchestrator | Wednesday 14 May 2025 14:31:30 +0000 (0:00:01.340) 0:00:56.791 ********* 2025-05-14 14:43:29.299919 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.299927 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.299935 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.299993 | orchestrator | 2025-05-14 14:43:29.300001 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.300009 | orchestrator | Wednesday 14 May 2025 14:31:30 +0000 (0:00:00.661) 0:00:57.453 ********* 2025-05-14 14:43:29.300017 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300025 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300033 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.300041 | orchestrator | 2025-05-14 14:43:29.300054 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.300069 | orchestrator | Wednesday 14 May 2025 14:31:31 +0000 (0:00:00.868) 0:00:58.322 ********* 2025-05-14 14:43:29.300083 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300097 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300112 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.300126 | orchestrator | 2025-05-14 14:43:29.300141 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.300156 | orchestrator | Wednesday 14 May 2025 14:31:32 +0000 (0:00:00.613) 0:00:58.935 ********* 2025-05-14 14:43:29.300169 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.300183 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.300197 | orchestrator | ok: [testbed-node-5] 
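The set_fact _current_monitor_address task above loops over the three monitor entries and keeps only the one whose name matches the current host, which is why testbed-node-0, -1 and -2 each report a single ok item and skip the rest, while the non-monitor nodes 3-5 skip every item. A minimal sketch of that selection pattern, using simplified, hypothetical variable names rather than the actual ceph-ansible source:

---
# Sketch only: illustrates the loop/when selection visible in the log,
# not the real ceph-facts task. Variable names are assumptions.
- hosts: localhost
  gather_facts: false
  vars:
    # Candidate monitors mirroring the items shown in the log output.
    _monitor_addresses:
      - { name: testbed-node-0, addr: 192.168.16.10 }
      - { name: testbed-node-1, addr: 192.168.16.11 }
      - { name: testbed-node-2, addr: 192.168.16.12 }
    # Pretend the play is running on testbed-node-1.
    current_hostname: testbed-node-1
  tasks:
    - name: set_fact _current_monitor_address (sketch)
      ansible.builtin.set_fact:
        _current_monitor_address: "{{ item.addr }}"
      loop: "{{ _monitor_addresses }}"
      # Entries whose name does not match the current host are skipped,
      # producing the ok/skipping pattern seen per node in the log above.
      when: item.name == current_hostname

    - name: Show the selected address
      ansible.builtin.debug:
        var: _current_monitor_address

Run with ansible-playbook against localhost, this prints 192.168.16.11 for the pretend host testbed-node-1; on a non-monitor host every item would be skipped and the fact would stay unset, as it does for testbed-node-3 through -5 above.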
2025-05-14 14:43:29.300211 | orchestrator | 2025-05-14 14:43:29.300220 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.300228 | orchestrator | Wednesday 14 May 2025 14:31:33 +0000 (0:00:00.948) 0:00:59.884 ********* 2025-05-14 14:43:29.300243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.300251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.300259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.300266 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300274 | orchestrator | 2025-05-14 14:43:29.300282 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.300289 | orchestrator | Wednesday 14 May 2025 14:31:34 +0000 (0:00:00.690) 0:01:00.574 ********* 2025-05-14 14:43:29.300297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.300305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.300313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.300320 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300328 | orchestrator | 2025-05-14 14:43:29.300336 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.300344 | orchestrator | Wednesday 14 May 2025 14:31:34 +0000 (0:00:00.505) 0:01:01.080 ********* 2025-05-14 14:43:29.300353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.300360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.300368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.300376 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300384 | orchestrator | 2025-05-14 14:43:29.300392 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.300399 | orchestrator | Wednesday 14 May 2025 14:31:35 +0000 (0:00:00.887) 0:01:01.968 ********* 2025-05-14 14:43:29.300407 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.300415 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.300423 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.300431 | orchestrator | 2025-05-14 14:43:29.300439 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.300453 | orchestrator | Wednesday 14 May 2025 14:31:35 +0000 (0:00:00.431) 0:01:02.399 ********* 2025-05-14 14:43:29.300463 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 14:43:29.300483 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 14:43:29.300496 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 14:43:29.300510 | orchestrator | 2025-05-14 14:43:29.300523 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.300537 | orchestrator | Wednesday 14 May 2025 14:31:36 +0000 (0:00:01.069) 0:01:03.469 ********* 2025-05-14 14:43:29.300545 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300552 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300560 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.300568 | orchestrator | 2025-05-14 14:43:29.300575 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.300583 | orchestrator | Wednesday 14 May 2025 14:31:37 +0000 (0:00:00.494) 0:01:03.964 ********* 2025-05-14 14:43:29.300591 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300599 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300656 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.300667 | orchestrator | 2025-05-14 14:43:29.300675 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.300683 | orchestrator | Wednesday 14 May 2025 14:31:38 +0000 (0:00:00.627) 0:01:04.591 ********* 2025-05-14 14:43:29.300691 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.300699 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300706 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.300714 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300722 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.300729 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.300758 | orchestrator | 2025-05-14 14:43:29.300767 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.300775 | orchestrator | Wednesday 14 May 2025 14:31:38 +0000 (0:00:00.626) 0:01:05.217 ********* 2025-05-14 14:43:29.300783 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.300791 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300799 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.300807 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300815 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.300823 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.300831 | orchestrator | 2025-05-14 14:43:29.300839 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.300846 | orchestrator | Wednesday 14 May 2025 14:31:39 +0000 (0:00:00.708) 0:01:05.926 ********* 2025-05-14 14:43:29.300854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.300862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.300870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.300877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.300885 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.300892 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.300900 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.300908 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.300915 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.300923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.300931 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.300938 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
14:43:29.300946 | orchestrator | 2025-05-14 14:43:29.300954 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 14:43:29.300962 | orchestrator | Wednesday 14 May 2025 14:31:40 +0000 (0:00:01.272) 0:01:07.199 ********* 2025-05-14 14:43:29.300970 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.300977 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.300985 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.300993 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301000 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301008 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301016 | orchestrator | 2025-05-14 14:43:29.301023 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 14:43:29.301031 | orchestrator | Wednesday 14 May 2025 14:31:41 +0000 (0:00:00.977) 0:01:08.176 ********* 2025-05-14 14:43:29.301039 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:29.301046 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:43:29.301052 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:43:29.301058 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 14:43:29.301065 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 14:43:29.301072 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 14:43:29.301079 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 14:43:29.301085 | orchestrator | 2025-05-14 14:43:29.301092 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 14:43:29.301106 | orchestrator | Wednesday 14 May 2025 14:31:42 +0000 (0:00:01.011) 0:01:09.188 ********* 2025-05-14 14:43:29.301113 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:29.301126 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:43:29.301137 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:43:29.301144 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 14:43:29.301151 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 14:43:29.301158 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 14:43:29.301164 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 14:43:29.301171 | orchestrator | 2025-05-14 14:43:29.301177 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.301184 | orchestrator | Wednesday 14 May 2025 14:31:44 +0000 (0:00:01.718) 0:01:10.906 ********* 2025-05-14 14:43:29.301191 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.301199 | orchestrator | 2025-05-14 14:43:29.301206 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-05-14 14:43:29.301212 | orchestrator | Wednesday 14 May 2025 14:31:45 +0000 (0:00:01.185) 0:01:12.092 ********* 2025-05-14 14:43:29.301219 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.301226 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.301232 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301239 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.301245 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301252 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301258 | orchestrator | 2025-05-14 14:43:29.301265 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.301271 | orchestrator | Wednesday 14 May 2025 14:31:46 +0000 (0:00:01.007) 0:01:13.100 ********* 2025-05-14 14:43:29.301278 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301284 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301291 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301297 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.301304 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.301311 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.301317 | orchestrator | 2025-05-14 14:43:29.301324 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.301330 | orchestrator | Wednesday 14 May 2025 14:31:47 +0000 (0:00:01.382) 0:01:14.482 ********* 2025-05-14 14:43:29.301337 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301343 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301350 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301356 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.301363 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.301370 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.301377 | orchestrator | 2025-05-14 14:43:29.301383 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 14:43:29.301390 | orchestrator | Wednesday 14 May 2025 14:31:49 +0000 (0:00:01.238) 0:01:15.721 ********* 2025-05-14 14:43:29.301396 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301403 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301409 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301416 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.301422 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.301429 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.301435 | orchestrator | 2025-05-14 14:43:29.301442 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.301448 | orchestrator | Wednesday 14 May 2025 14:31:50 +0000 (0:00:00.959) 0:01:16.680 ********* 2025-05-14 14:43:29.301459 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.301466 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.301473 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301479 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.301486 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301492 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301499 | orchestrator | 2025-05-14 14:43:29.301505 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 
14:43:29.301512 | orchestrator | Wednesday 14 May 2025 14:31:51 +0000 (0:00:01.488) 0:01:18.169 ********* 2025-05-14 14:43:29.301519 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301525 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301532 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301538 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301544 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301551 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301558 | orchestrator | 2025-05-14 14:43:29.301564 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.301571 | orchestrator | Wednesday 14 May 2025 14:31:52 +0000 (0:00:00.589) 0:01:18.759 ********* 2025-05-14 14:43:29.301577 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301584 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301590 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301597 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301603 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301631 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301638 | orchestrator | 2025-05-14 14:43:29.301645 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.301651 | orchestrator | Wednesday 14 May 2025 14:31:52 +0000 (0:00:00.695) 0:01:19.454 ********* 2025-05-14 14:43:29.301658 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301664 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301671 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301677 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301683 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301690 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301696 | orchestrator | 2025-05-14 14:43:29.301703 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.301710 | orchestrator | Wednesday 14 May 2025 14:31:53 +0000 (0:00:00.797) 0:01:20.252 ********* 2025-05-14 14:43:29.301721 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301728 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301734 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301741 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301748 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301754 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301761 | orchestrator | 2025-05-14 14:43:29.301767 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.301774 | orchestrator | Wednesday 14 May 2025 14:31:54 +0000 (0:00:00.918) 0:01:21.170 ********* 2025-05-14 14:43:29.301780 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301787 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301793 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301800 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301806 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301813 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301819 | orchestrator | 2025-05-14 14:43:29.301825 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
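The check_running_containers.yml block above probes each host for ceph daemon containers that already exist, scoped by role: mon and mgr checks run on testbed-node-0/1/2, the osd/mds/rgw checks on testbed-node-3/4/5, and the ceph-crash probe whose result follows runs on every node, so later handlers only act on daemons that are actually deployed. A rough sketch of such a probe, assuming docker and a "ceph-<daemon>-<hostname>" container naming scheme; the role's actual container binary and filter may differ:

```python
# Per-daemon container liveness probe (requires a local docker CLI).
import subprocess

def container_running(daemon: str, hostname: str) -> bool:
    name = f"ceph-{daemon}-{hostname}"               # assumed naming scheme
    out = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name={name}"],
        capture_output=True, text=True, check=False,
    )
    return bool(out.stdout.strip())                  # any container ID means the daemon is up

if __name__ == "__main__":
    for daemon in ("mon", "mgr", "crash"):           # the checks run on a control node
        print(daemon, container_running(daemon, "testbed-node-0"))
```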
2025-05-14 14:43:29.301832 | orchestrator | Wednesday 14 May 2025 14:31:55 +0000 (0:00:00.636) 0:01:21.807 ********* 2025-05-14 14:43:29.301839 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.301846 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.301852 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.301864 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.301871 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.301877 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.301884 | orchestrator | 2025-05-14 14:43:29.301891 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.301897 | orchestrator | Wednesday 14 May 2025 14:31:56 +0000 (0:00:01.564) 0:01:23.372 ********* 2025-05-14 14:43:29.301904 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.301910 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.301917 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.301923 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301930 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301936 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.301943 | orchestrator | 2025-05-14 14:43:29.301949 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.301956 | orchestrator | Wednesday 14 May 2025 14:31:57 +0000 (0:00:00.634) 0:01:24.006 ********* 2025-05-14 14:43:29.301962 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.301969 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.301975 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.301982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.301988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.301994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.302001 | orchestrator | 2025-05-14 14:43:29.302007 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.302682 | orchestrator | Wednesday 14 May 2025 14:31:58 +0000 (0:00:00.799) 0:01:24.806 ********* 2025-05-14 14:43:29.302771 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.302791 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.302798 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.302805 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.302811 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.302818 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.302825 | orchestrator | 2025-05-14 14:43:29.302832 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.302839 | orchestrator | Wednesday 14 May 2025 14:31:58 +0000 (0:00:00.676) 0:01:25.483 ********* 2025-05-14 14:43:29.302846 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.302852 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.302859 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.302865 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.302872 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.302878 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.302885 | orchestrator | 2025-05-14 14:43:29.302891 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.302898 | orchestrator | Wednesday 14 May 2025 14:31:59 +0000 
(0:00:00.967) 0:01:26.451 ********* 2025-05-14 14:43:29.302905 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.302911 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.302918 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.302924 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.302931 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.302937 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.302944 | orchestrator | 2025-05-14 14:43:29.302950 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.302957 | orchestrator | Wednesday 14 May 2025 14:32:00 +0000 (0:00:00.602) 0:01:27.053 ********* 2025-05-14 14:43:29.302963 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.302970 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.302976 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.302982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.302989 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.302995 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303002 | orchestrator | 2025-05-14 14:43:29.303016 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.303022 | orchestrator | Wednesday 14 May 2025 14:32:01 +0000 (0:00:00.829) 0:01:27.882 ********* 2025-05-14 14:43:29.303029 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303036 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303042 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303049 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303055 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303062 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303069 | orchestrator | 2025-05-14 14:43:29.303075 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.303082 | orchestrator | Wednesday 14 May 2025 14:32:01 +0000 (0:00:00.589) 0:01:28.472 ********* 2025-05-14 14:43:29.303089 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.303095 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.303102 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.303108 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303115 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303121 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303128 | orchestrator | 2025-05-14 14:43:29.303135 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.303226 | orchestrator | Wednesday 14 May 2025 14:32:02 +0000 (0:00:00.804) 0:01:29.277 ********* 2025-05-14 14:43:29.303237 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.303249 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.303255 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.303262 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.303268 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.303275 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.303281 | orchestrator | 2025-05-14 14:43:29.303288 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.303295 | orchestrator | Wednesday 14 May 2025 14:32:03 +0000 (0:00:00.736) 0:01:30.013 ********* 2025-05-14 14:43:29.303301 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303308 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303314 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303321 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303327 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303334 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303340 | orchestrator | 2025-05-14 14:43:29.303347 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.303353 | orchestrator | Wednesday 14 May 2025 14:32:04 +0000 (0:00:01.389) 0:01:31.403 ********* 2025-05-14 14:43:29.303360 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303366 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303372 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303379 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303385 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303392 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303398 | orchestrator | 2025-05-14 14:43:29.303405 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.303411 | orchestrator | Wednesday 14 May 2025 14:32:05 +0000 (0:00:00.802) 0:01:32.205 ********* 2025-05-14 14:43:29.303418 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303424 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303431 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303438 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303444 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303451 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303460 | orchestrator | 2025-05-14 14:43:29.303471 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.303482 | orchestrator | Wednesday 14 May 2025 14:32:06 +0000 (0:00:00.831) 0:01:33.037 ********* 2025-05-14 14:43:29.303493 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303513 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303524 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303534 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303545 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303556 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303568 | orchestrator | 2025-05-14 14:43:29.303575 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.303581 | orchestrator | Wednesday 14 May 2025 14:32:07 +0000 (0:00:00.631) 0:01:33.669 ********* 2025-05-14 14:43:29.303588 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303594 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303601 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303649 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303658 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303664 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303671 | orchestrator | 2025-05-14 14:43:29.303677 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.303684 | orchestrator | Wednesday 14 May 2025 14:32:07 +0000 (0:00:00.854) 0:01:34.523 ********* 2025-05-14 
14:43:29.303690 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303697 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303703 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303710 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303716 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303723 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303729 | orchestrator | 2025-05-14 14:43:29.303736 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.303742 | orchestrator | Wednesday 14 May 2025 14:32:08 +0000 (0:00:00.603) 0:01:35.126 ********* 2025-05-14 14:43:29.303749 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303755 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303762 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303768 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303775 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303781 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303788 | orchestrator | 2025-05-14 14:43:29.303794 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.303801 | orchestrator | Wednesday 14 May 2025 14:32:09 +0000 (0:00:00.809) 0:01:35.936 ********* 2025-05-14 14:43:29.303808 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303814 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303821 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303827 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303834 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303841 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303848 | orchestrator | 2025-05-14 14:43:29.303856 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.303864 | orchestrator | Wednesday 14 May 2025 14:32:10 +0000 (0:00:00.626) 0:01:36.562 ********* 2025-05-14 14:43:29.303871 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303878 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.303885 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.303892 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.303900 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.303907 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.303914 | orchestrator | 2025-05-14 14:43:29.303922 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.303930 | orchestrator | Wednesday 14 May 2025 14:32:10 +0000 (0:00:00.834) 0:01:37.397 ********* 2025-05-14 14:43:29.303937 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.303944 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304022 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304033 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304045 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304053 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304061 | orchestrator | 2025-05-14 14:43:29.304069 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.304076 
| orchestrator | Wednesday 14 May 2025 14:32:11 +0000 (0:00:00.617) 0:01:38.015 ********* 2025-05-14 14:43:29.304084 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304092 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304100 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304107 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304118 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304130 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304141 | orchestrator | 2025-05-14 14:43:29.304153 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.304165 | orchestrator | Wednesday 14 May 2025 14:32:12 +0000 (0:00:00.923) 0:01:38.939 ********* 2025-05-14 14:43:29.304176 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304183 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304189 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304196 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304203 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304210 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304217 | orchestrator | 2025-05-14 14:43:29.304223 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.304230 | orchestrator | Wednesday 14 May 2025 14:32:13 +0000 (0:00:00.885) 0:01:39.824 ********* 2025-05-14 14:43:29.304236 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.304244 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.304254 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304264 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.304274 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.304284 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.304294 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.304302 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304309 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.304315 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304321 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.304327 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.304333 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.304338 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304344 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304350 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.304356 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.304362 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304369 | orchestrator | 2025-05-14 14:43:29.304375 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.304381 | orchestrator | Wednesday 14 May 2025 14:32:14 +0000 (0:00:01.076) 0:01:40.901 ********* 2025-05-14 14:43:29.304387 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 14:43:29.304393 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 14:43:29.304399 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304405 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-05-14 14:43:29.304411 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 14:43:29.304417 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304443 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 14:43:29.304450 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 14:43:29.304462 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304468 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 14:43:29.304474 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 14:43:29.304481 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304487 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 14:43:29.304493 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 14:43:29.304499 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304505 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 14:43:29.304511 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 14:43:29.304517 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304523 | orchestrator | 2025-05-14 14:43:29.304529 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.304536 | orchestrator | Wednesday 14 May 2025 14:32:15 +0000 (0:00:00.743) 0:01:41.645 ********* 2025-05-14 14:43:29.304542 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304548 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304554 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304560 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304566 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304572 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304578 | orchestrator | 2025-05-14 14:43:29.304584 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.304590 | orchestrator | Wednesday 14 May 2025 14:32:16 +0000 (0:00:01.145) 0:01:42.790 ********* 2025-05-14 14:43:29.304596 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304602 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304624 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304631 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304637 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304643 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304649 | orchestrator | 2025-05-14 14:43:29.304655 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.304722 | orchestrator | Wednesday 14 May 2025 14:32:16 +0000 (0:00:00.632) 0:01:43.422 ********* 2025-05-14 14:43:29.304731 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304742 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304748 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304754 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304760 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304766 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304772 | orchestrator | 2025-05-14 
14:43:29.304778 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.304784 | orchestrator | Wednesday 14 May 2025 14:32:18 +0000 (0:00:01.169) 0:01:44.592 ********* 2025-05-14 14:43:29.304790 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304796 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304802 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304808 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304814 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304820 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304826 | orchestrator | 2025-05-14 14:43:29.304832 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.304838 | orchestrator | Wednesday 14 May 2025 14:32:18 +0000 (0:00:00.825) 0:01:45.418 ********* 2025-05-14 14:43:29.304844 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304850 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304856 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304862 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304873 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304880 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304886 | orchestrator | 2025-05-14 14:43:29.304892 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.304901 | orchestrator | Wednesday 14 May 2025 14:32:19 +0000 (0:00:00.838) 0:01:46.256 ********* 2025-05-14 14:43:29.304912 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.304922 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.304933 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.304943 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.304953 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.304962 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.304969 | orchestrator | 2025-05-14 14:43:29.304975 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.304981 | orchestrator | Wednesday 14 May 2025 14:32:20 +0000 (0:00:00.720) 0:01:46.977 ********* 2025-05-14 14:43:29.304987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.304993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.304999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.305005 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305011 | orchestrator | 2025-05-14 14:43:29.305017 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.305023 | orchestrator | Wednesday 14 May 2025 14:32:21 +0000 (0:00:00.654) 0:01:47.631 ********* 2025-05-14 14:43:29.305030 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.305036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.305042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.305048 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305054 | orchestrator | 2025-05-14 14:43:29.305060 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
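The ceph-facts tasks being re-run here evaluate the RGW address facts for all six hosts this time (results follow below). The rgw_instances entries produced on the first pass — instance_name rgw0, the per-node address, radosgw_frontend_port 8081 — follow the shape sketched here; the per-instance port increment is an assumption about the multi-instance case rather than something shown in this log:

```python
# Sketch of how an rgw_instances entry like
# {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}
# can be derived for a host.
def rgw_instances(radosgw_address: str, base_port: int = 8081, count: int = 1):
    return [
        {
            "instance_name": f"rgw{i}",
            "radosgw_address": radosgw_address,
            "radosgw_frontend_port": base_port + i,  # assumed increment per extra instance
        }
        for i in range(count)
    ]

print(rgw_instances("192.168.16.13"))                # -> the entry seen for testbed-node-3
```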
2025-05-14 14:43:29.305067 | orchestrator | Wednesday 14 May 2025 14:32:22 +0000 (0:00:00.920) 0:01:48.552 ********* 2025-05-14 14:43:29.305073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.305079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.305085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.305091 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305097 | orchestrator | 2025-05-14 14:43:29.305103 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.305110 | orchestrator | Wednesday 14 May 2025 14:32:22 +0000 (0:00:00.412) 0:01:48.965 ********* 2025-05-14 14:43:29.305116 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305122 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305128 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305134 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305140 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305147 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305153 | orchestrator | 2025-05-14 14:43:29.305159 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.305165 | orchestrator | Wednesday 14 May 2025 14:32:23 +0000 (0:00:00.672) 0:01:49.637 ********* 2025-05-14 14:43:29.305171 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.305177 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.305183 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305189 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.305195 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305201 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.305207 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305213 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305220 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.305231 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305237 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.305243 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305249 | orchestrator | 2025-05-14 14:43:29.305255 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.305261 | orchestrator | Wednesday 14 May 2025 14:32:24 +0000 (0:00:01.177) 0:01:50.814 ********* 2025-05-14 14:43:29.305267 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305273 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305279 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305286 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305292 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305298 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305304 | orchestrator | 2025-05-14 14:43:29.305359 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.305372 | orchestrator | Wednesday 14 May 2025 14:32:24 +0000 (0:00:00.632) 0:01:51.447 ********* 2025-05-14 14:43:29.305379 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305385 | orchestrator | skipping: [testbed-node-1] 
2025-05-14 14:43:29.305391 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305397 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305403 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305409 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305415 | orchestrator | 2025-05-14 14:43:29.305421 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.305427 | orchestrator | Wednesday 14 May 2025 14:32:25 +0000 (0:00:00.814) 0:01:52.262 ********* 2025-05-14 14:43:29.305433 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.305439 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305445 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.305451 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305457 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.305463 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305469 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.305475 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305481 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.305487 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305493 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.305499 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305506 | orchestrator | 2025-05-14 14:43:29.305512 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.305518 | orchestrator | Wednesday 14 May 2025 14:32:26 +0000 (0:00:00.792) 0:01:53.054 ********* 2025-05-14 14:43:29.305524 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305530 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305536 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305545 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.305556 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305567 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.305577 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305588 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.305599 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305625 | orchestrator | 2025-05-14 14:43:29.305633 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.305639 | orchestrator | Wednesday 14 May 2025 14:32:27 +0000 (0:00:00.821) 0:01:53.876 ********* 2025-05-14 14:43:29.305651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.305657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.305663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.305669 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305677 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 14:43:29.305687 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-05-14 14:43:29.305697 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 14:43:29.305707 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 14:43:29.305717 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 14:43:29.305728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 14:43:29.305735 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.305747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.305753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.305759 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.305770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.305777 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.305783 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305789 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.305801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.305807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.305813 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305819 | orchestrator | 2025-05-14 14:43:29.305825 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.305831 | orchestrator | Wednesday 14 May 2025 14:32:28 +0000 (0:00:01.565) 0:01:55.442 ********* 2025-05-14 14:43:29.305837 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305844 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305850 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305856 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.305861 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.305867 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.305873 | orchestrator | 2025-05-14 14:43:29.305879 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.305885 | orchestrator | Wednesday 14 May 2025 14:32:30 +0000 (0:00:01.230) 0:01:56.672 ********* 2025-05-14 14:43:29.305891 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.305897 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.305975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.305985 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.305996 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306002 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.306008 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306049 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.306058 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306064 | orchestrator | 2025-05-14 14:43:29.306070 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.306076 | orchestrator | Wednesday 14 
May 2025 14:32:31 +0000 (0:00:01.268) 0:01:57.941 ********* 2025-05-14 14:43:29.306082 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306089 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.306099 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.306117 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306128 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306138 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306147 | orchestrator | 2025-05-14 14:43:29.306156 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.306165 | orchestrator | Wednesday 14 May 2025 14:32:32 +0000 (0:00:01.240) 0:01:59.181 ********* 2025-05-14 14:43:29.306173 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306183 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.306193 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.306205 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306214 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306221 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306227 | orchestrator | 2025-05-14 14:43:29.306233 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-14 14:43:29.306239 | orchestrator | Wednesday 14 May 2025 14:32:33 +0000 (0:00:01.268) 0:02:00.449 ********* 2025-05-14 14:43:29.306245 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.306251 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.306257 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.306263 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.306270 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.306276 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.306282 | orchestrator | 2025-05-14 14:43:29.306288 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-14 14:43:29.306294 | orchestrator | Wednesday 14 May 2025 14:32:35 +0000 (0:00:01.589) 0:02:02.038 ********* 2025-05-14 14:43:29.306300 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.306306 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.306312 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.306318 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.306324 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.306330 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.306336 | orchestrator | 2025-05-14 14:43:29.306342 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-05-14 14:43:29.306348 | orchestrator | Wednesday 14 May 2025 14:32:37 +0000 (0:00:02.126) 0:02:04.165 ********* 2025-05-14 14:43:29.306356 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.306363 | orchestrator | 2025-05-14 14:43:29.306369 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-14 14:43:29.306375 | orchestrator | Wednesday 14 May 2025 14:32:38 +0000 (0:00:01.166) 0:02:05.331 ********* 2025-05-14 14:43:29.306381 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306387 | orchestrator | skipping: [testbed-node-1] 2025-05-14 
14:43:29.306393 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.306399 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306405 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306411 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306418 | orchestrator | 2025-05-14 14:43:29.306424 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-14 14:43:29.306430 | orchestrator | Wednesday 14 May 2025 14:32:39 +0000 (0:00:00.626) 0:02:05.957 ********* 2025-05-14 14:43:29.306436 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306442 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.306448 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.306454 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306460 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306466 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306472 | orchestrator | 2025-05-14 14:43:29.306478 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-14 14:43:29.306484 | orchestrator | Wednesday 14 May 2025 14:32:40 +0000 (0:00:00.923) 0:02:06.881 ********* 2025-05-14 14:43:29.306500 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 14:43:29.306506 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 14:43:29.306512 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 14:43:29.306518 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 14:43:29.306524 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 14:43:29.306530 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 14:43:29.306537 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 14:43:29.306543 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 14:43:29.306549 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 14:43:29.306555 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 14:43:29.306639 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 14:43:29.306654 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 14:43:29.306661 | orchestrator | 2025-05-14 14:43:29.306668 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-14 14:43:29.306674 | orchestrator | Wednesday 14 May 2025 14:32:41 +0000 (0:00:01.381) 0:02:08.262 ********* 2025-05-14 14:43:29.306680 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.306686 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.306692 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.306698 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.306704 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.306710 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.306717 | orchestrator | 2025-05-14 14:43:29.306723 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-05-14 14:43:29.306729 | orchestrator | Wednesday 14 May 2025 14:32:43 +0000 (0:00:01.427) 0:02:09.690 ********* 2025-05-14 14:43:29.306735 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306742 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.306753 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.306765 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306776 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306782 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306789 | orchestrator | 2025-05-14 14:43:29.306795 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-14 14:43:29.306801 | orchestrator | Wednesday 14 May 2025 14:32:44 +0000 (0:00:00.835) 0:02:10.525 ********* 2025-05-14 14:43:29.306807 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306813 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.306819 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.306825 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.306831 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.306837 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.306843 | orchestrator | 2025-05-14 14:43:29.306849 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-14 14:43:29.306855 | orchestrator | Wednesday 14 May 2025 14:32:44 +0000 (0:00:00.942) 0:02:11.468 ********* 2025-05-14 14:43:29.306862 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.306868 | orchestrator | 2025-05-14 14:43:29.306874 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-14 14:43:29.306880 | orchestrator | Wednesday 14 May 2025 14:32:46 +0000 (0:00:01.271) 0:02:12.740 ********* 2025-05-14 14:43:29.306892 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.306899 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.306905 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.306911 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.306918 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.306924 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.306930 | orchestrator | 2025-05-14 14:43:29.306936 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-14 14:43:29.306942 | orchestrator | Wednesday 14 May 2025 14:33:28 +0000 (0:00:42.678) 0:02:55.418 ********* 2025-05-14 14:43:29.306948 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 14:43:29.306954 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 14:43:29.306961 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 14:43:29.306967 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.306973 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 14:43:29.306979 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 14:43:29.306985 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
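The roughly 43 seconds (0:00:42.678) logged above are spent pulling registry.osism.tech/osism/ceph-daemon:17.2.7 on every node via the included fetch_image.yml, while the monitoring images (alertmanager, prometheus, grafana) are skipped in this deployment. As a rough illustration only — the module choice, retry policy and variable names below are assumptions, not the actual fetch_image.yml contents — such a pull step can be written as:

  - name: Pull the ceph-daemon container image        # illustrative sketch, not fetch_image.yml
    containers.podman.podman_image:
      name: registry.osism.tech/osism/ceph-daemon
      tag: "17.2.7"
    register: _pull_result
    retries: 3                                        # assumed retry policy
    delay: 10
    until: _pull_result is succeeded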
2025-05-14 14:43:29.306991 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.306997 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 14:43:29.307003 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 14:43:29.307009 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 14:43:29.307015 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307021 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 14:43:29.307027 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 14:43:29.307033 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 14:43:29.307040 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307046 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 14:43:29.307052 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 14:43:29.307058 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 14:43:29.307081 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307087 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 14:43:29.307093 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 14:43:29.307099 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 14:43:29.307106 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307112 | orchestrator | 2025-05-14 14:43:29.307118 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-14 14:43:29.307125 | orchestrator | Wednesday 14 May 2025 14:33:29 +0000 (0:00:00.717) 0:02:56.136 ********* 2025-05-14 14:43:29.307131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307194 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307209 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307220 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307232 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307239 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307245 | orchestrator | 2025-05-14 14:43:29.307251 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-14 14:43:29.307257 | orchestrator | Wednesday 14 May 2025 14:33:30 +0000 (0:00:00.651) 0:02:56.788 ********* 2025-05-14 14:43:29.307263 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307269 | orchestrator | 2025-05-14 14:43:29.307281 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-14 14:43:29.307287 | orchestrator | Wednesday 14 May 2025 14:33:30 +0000 (0:00:00.136) 0:02:56.924 ********* 2025-05-14 14:43:29.307293 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307299 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307308 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307318 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307329 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307339 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307345 | 
orchestrator | 2025-05-14 14:43:29.307351 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-14 14:43:29.307357 | orchestrator | Wednesday 14 May 2025 14:33:31 +0000 (0:00:00.763) 0:02:57.688 ********* 2025-05-14 14:43:29.307363 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307369 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307375 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307382 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307388 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307394 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307400 | orchestrator | 2025-05-14 14:43:29.307406 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-14 14:43:29.307412 | orchestrator | Wednesday 14 May 2025 14:33:31 +0000 (0:00:00.653) 0:02:58.341 ********* 2025-05-14 14:43:29.307418 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307424 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307430 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307436 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307442 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307448 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307453 | orchestrator | 2025-05-14 14:43:29.307459 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-14 14:43:29.307466 | orchestrator | Wednesday 14 May 2025 14:33:32 +0000 (0:00:00.821) 0:02:59.163 ********* 2025-05-14 14:43:29.307472 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.307478 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.307484 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.307490 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.307496 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.307502 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.307508 | orchestrator | 2025-05-14 14:43:29.307514 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-14 14:43:29.307520 | orchestrator | Wednesday 14 May 2025 14:33:34 +0000 (0:00:02.198) 0:03:01.361 ********* 2025-05-14 14:43:29.307526 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.307532 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.307538 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.307544 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.307550 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.307556 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.307562 | orchestrator | 2025-05-14 14:43:29.307568 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-14 14:43:29.307574 | orchestrator | Wednesday 14 May 2025 14:33:35 +0000 (0:00:00.745) 0:03:02.106 ********* 2025-05-14 14:43:29.307581 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.307589 | orchestrator | 2025-05-14 14:43:29.307595 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-14 14:43:29.307601 | orchestrator | Wednesday 14 May 2025 14:33:36 +0000 (0:00:00.969) 0:03:03.076 ********* 2025-05-14 
14:43:29.307650 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307658 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307664 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307670 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307681 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307687 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307693 | orchestrator | 2025-05-14 14:43:29.307699 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-14 14:43:29.307705 | orchestrator | Wednesday 14 May 2025 14:33:37 +0000 (0:00:00.523) 0:03:03.599 ********* 2025-05-14 14:43:29.307711 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307718 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307724 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307729 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307734 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307740 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307745 | orchestrator | 2025-05-14 14:43:29.307750 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-14 14:43:29.307756 | orchestrator | Wednesday 14 May 2025 14:33:37 +0000 (0:00:00.660) 0:03:04.260 ********* 2025-05-14 14:43:29.307761 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307767 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307772 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307777 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307782 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307788 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307793 | orchestrator | 2025-05-14 14:43:29.307798 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-14 14:43:29.307804 | orchestrator | Wednesday 14 May 2025 14:33:38 +0000 (0:00:00.567) 0:03:04.827 ********* 2025-05-14 14:43:29.307809 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307814 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307821 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307826 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307884 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307892 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307898 | orchestrator | 2025-05-14 14:43:29.307908 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-14 14:43:29.307914 | orchestrator | Wednesday 14 May 2025 14:33:38 +0000 (0:00:00.640) 0:03:05.467 ********* 2025-05-14 14:43:29.307921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307927 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307932 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307938 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307944 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.307950 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.307956 | orchestrator | 2025-05-14 14:43:29.307962 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-14 14:43:29.307968 | orchestrator | Wednesday 14 May 2025 14:33:39 +0000 (0:00:00.479) 0:03:05.947 ********* 
2025-05-14 14:43:29.307974 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.307980 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.307986 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.307992 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.307998 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.308004 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.308010 | orchestrator | 2025-05-14 14:43:29.308016 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-14 14:43:29.308022 | orchestrator | Wednesday 14 May 2025 14:33:40 +0000 (0:00:00.757) 0:03:06.704 ********* 2025-05-14 14:43:29.308028 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.308034 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.308040 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.308045 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.308051 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.308057 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.308063 | orchestrator | 2025-05-14 14:43:29.308078 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-14 14:43:29.308088 | orchestrator | Wednesday 14 May 2025 14:33:40 +0000 (0:00:00.613) 0:03:07.318 ********* 2025-05-14 14:43:29.308099 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.308106 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.308112 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.308118 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.308124 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.308130 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.308136 | orchestrator | 2025-05-14 14:43:29.308142 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.308149 | orchestrator | Wednesday 14 May 2025 14:33:41 +0000 (0:00:01.159) 0:03:08.478 ********* 2025-05-14 14:43:29.308155 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.308161 | orchestrator | 2025-05-14 14:43:29.308166 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-14 14:43:29.308172 | orchestrator | Wednesday 14 May 2025 14:33:43 +0000 (0:00:01.187) 0:03:09.665 ********* 2025-05-14 14:43:29.308177 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-14 14:43:29.308182 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-14 14:43:29.308188 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-14 14:43:29.308193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-14 14:43:29.308200 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-14 14:43:29.308209 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-14 14:43:29.308218 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-14 14:43:29.308226 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-14 14:43:29.308236 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-14 14:43:29.308245 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-14 14:43:29.308254 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-14 14:43:29.308262 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-14 14:43:29.308268 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-14 14:43:29.308273 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-14 14:43:29.308278 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-14 14:43:29.308284 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-14 14:43:29.308289 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-14 14:43:29.308294 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-14 14:43:29.308299 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-14 14:43:29.308305 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-14 14:43:29.308310 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-14 14:43:29.308315 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-14 14:43:29.308320 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-14 14:43:29.308325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-14 14:43:29.308331 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-14 14:43:29.308336 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-14 14:43:29.308341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-14 14:43:29.308347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-14 14:43:29.308352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-14 14:43:29.308357 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-14 14:43:29.308363 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-14 14:43:29.308440 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-14 14:43:29.308450 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-14 14:43:29.308459 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-14 14:43:29.308465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-14 14:43:29.308470 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-14 14:43:29.308475 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-14 14:43:29.308481 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-14 14:43:29.308486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-14 14:43:29.308492 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 14:43:29.308497 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 14:43:29.308502 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-14 14:43:29.308507 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-14 14:43:29.308513 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-14 14:43:29.308520 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 14:43:29.308530 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 
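The loop whose per-node results continue here lays down the standard ceph directory tree on each node. A minimal sketch of such a task, with the directory list taken from the log output and the ownership/mode values assumed rather than copied from create_ceph_initial_dirs.yml:

  - name: Create ceph initial directories             # sketch; not the actual role task
    ansible.builtin.file:
      path: "{{ item }}"
      state: directory
      owner: "167"                                    # assumed ceph container UID
      group: "167"
      mode: "0755"
    loop:
      - /etc/ceph
      - /var/lib/ceph/
      - /var/lib/ceph/mon
      - /var/lib/ceph/osd
      - /var/lib/ceph/mds
      - /var/lib/ceph/tmp
      - /var/lib/ceph/radosgw
      - /var/lib/ceph/bootstrap-rgw
      - /var/lib/ceph/bootstrap-mgr
      - /var/lib/ceph/bootstrap-mds
      - /var/lib/ceph/bootstrap-osd
      - /var/lib/ceph/bootstrap-rbd
      - /var/lib/ceph/bootstrap-rbd-mirror
      - /var/run/ceph
      - /var/log/ceph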
2025-05-14 14:43:29.308540 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 14:43:29.308550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 14:43:29.308555 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 14:43:29.308561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 14:43:29.308566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 14:43:29.308571 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 14:43:29.308577 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 14:43:29.308582 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 14:43:29.308587 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 14:43:29.308592 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 14:43:29.308598 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 14:43:29.308603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 14:43:29.308624 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 14:43:29.308629 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 14:43:29.308635 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 14:43:29.308640 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 14:43:29.308645 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 14:43:29.308651 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 14:43:29.308656 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 14:43:29.308662 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 14:43:29.308667 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 14:43:29.308672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 14:43:29.308678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 14:43:29.308683 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 14:43:29.308689 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 14:43:29.308700 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 14:43:29.308705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 14:43:29.308710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 14:43:29.308716 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-14 14:43:29.308721 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 14:43:29.308726 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 14:43:29.308732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 14:43:29.308737 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 14:43:29.308743 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-14 14:43:29.308748 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-14 14:43:29.308753 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-14 14:43:29.308759 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-14 14:43:29.308764 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-14 14:43:29.308769 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-14 14:43:29.308774 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-14 14:43:29.308780 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-14 14:43:29.308785 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-14 14:43:29.308790 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-14 14:43:29.308839 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-14 14:43:29.308847 | orchestrator | 2025-05-14 14:43:29.308856 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.308862 | orchestrator | Wednesday 14 May 2025 14:33:49 +0000 (0:00:06.080) 0:03:15.745 ********* 2025-05-14 14:43:29.308867 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.308873 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.308878 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.308884 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.308889 | orchestrator | 2025-05-14 14:43:29.308894 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-14 14:43:29.308899 | orchestrator | Wednesday 14 May 2025 14:33:50 +0000 (0:00:01.472) 0:03:17.218 ********* 2025-05-14 14:43:29.308905 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.308911 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.308916 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.308922 | orchestrator | 2025-05-14 14:43:29.308927 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-14 14:43:29.308936 | orchestrator | Wednesday 14 May 2025 14:33:51 +0000 (0:00:01.216) 0:03:18.434 ********* 2025-05-14 14:43:29.308946 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.308956 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.308962 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.308968 | orchestrator | 2025-05-14 14:43:29.308973 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.308984 | 
orchestrator | Wednesday 14 May 2025 14:33:53 +0000 (0:00:01.194) 0:03:19.629 ********* 2025-05-14 14:43:29.308989 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.308995 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309000 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309006 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.309011 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.309016 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.309022 | orchestrator | 2025-05-14 14:43:29.309027 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.309033 | orchestrator | Wednesday 14 May 2025 14:33:53 +0000 (0:00:00.852) 0:03:20.481 ********* 2025-05-14 14:43:29.309038 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309043 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309049 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309054 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.309059 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.309065 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.309070 | orchestrator | 2025-05-14 14:43:29.309076 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.309081 | orchestrator | Wednesday 14 May 2025 14:33:54 +0000 (0:00:00.721) 0:03:21.203 ********* 2025-05-14 14:43:29.309086 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309092 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309097 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309102 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309107 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309113 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309118 | orchestrator | 2025-05-14 14:43:29.309124 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.309129 | orchestrator | Wednesday 14 May 2025 14:33:55 +0000 (0:00:00.926) 0:03:22.129 ********* 2025-05-14 14:43:29.309135 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309140 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309145 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309151 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309156 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309161 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309166 | orchestrator | 2025-05-14 14:43:29.309172 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.309177 | orchestrator | Wednesday 14 May 2025 14:33:56 +0000 (0:00:00.629) 0:03:22.759 ********* 2025-05-14 14:43:29.309183 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309188 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309193 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309199 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309204 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309209 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309214 | orchestrator | 2025-05-14 14:43:29.309219 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.309236 | 
orchestrator | Wednesday 14 May 2025 14:33:57 +0000 (0:00:01.017) 0:03:23.776 ********* 2025-05-14 14:43:29.309245 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309253 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309280 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309291 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309298 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309303 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309309 | orchestrator | 2025-05-14 14:43:29.309314 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.309367 | orchestrator | Wednesday 14 May 2025 14:33:57 +0000 (0:00:00.700) 0:03:24.477 ********* 2025-05-14 14:43:29.309385 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309390 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309396 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309401 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309406 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309412 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309417 | orchestrator | 2025-05-14 14:43:29.309422 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.309428 | orchestrator | Wednesday 14 May 2025 14:33:59 +0000 (0:00:01.068) 0:03:25.545 ********* 2025-05-14 14:43:29.309434 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309439 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309444 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309450 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309455 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309460 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309468 | orchestrator | 2025-05-14 14:43:29.309479 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.309489 | orchestrator | Wednesday 14 May 2025 14:33:59 +0000 (0:00:00.631) 0:03:26.177 ********* 2025-05-14 14:43:29.309497 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309502 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309507 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309513 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.309518 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.309523 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.309529 | orchestrator | 2025-05-14 14:43:29.309534 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.309540 | orchestrator | Wednesday 14 May 2025 14:34:01 +0000 (0:00:02.159) 0:03:28.337 ********* 2025-05-14 14:43:29.309545 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309556 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309561 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.309566 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.309572 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.309577 | orchestrator | 2025-05-14 14:43:29.309582 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from 
ceph_conf_overrides] *** 2025-05-14 14:43:29.309588 | orchestrator | Wednesday 14 May 2025 14:34:02 +0000 (0:00:00.564) 0:03:28.902 ********* 2025-05-14 14:43:29.309594 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.309599 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.309604 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309627 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.309633 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.309638 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309644 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.309649 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.309654 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309659 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.309665 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.309670 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309675 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.309680 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.309686 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309691 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.309696 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.309701 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309706 | orchestrator | 2025-05-14 14:43:29.309712 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.309727 | orchestrator | Wednesday 14 May 2025 14:34:03 +0000 (0:00:00.700) 0:03:29.602 ********* 2025-05-14 14:43:29.309733 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 14:43:29.309738 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 14:43:29.309743 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309748 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 14:43:29.309754 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 14:43:29.309759 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309764 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 14:43:29.309769 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 14:43:29.309775 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309780 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-14 14:43:29.309785 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-14 14:43:29.309790 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-14 14:43:29.309796 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-14 14:43:29.309801 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-14 14:43:29.309806 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-14 14:43:29.309811 | orchestrator | 2025-05-14 14:43:29.309817 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.309822 | orchestrator | Wednesday 14 May 2025 14:34:03 +0000 (0:00:00.706) 0:03:30.308 ********* 2025-05-14 14:43:29.309827 | orchestrator | 
skipping: [testbed-node-0] 2025-05-14 14:43:29.309832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309838 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309843 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.309848 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.309854 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.309859 | orchestrator | 2025-05-14 14:43:29.309864 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.309870 | orchestrator | Wednesday 14 May 2025 14:34:04 +0000 (0:00:00.836) 0:03:31.145 ********* 2025-05-14 14:43:29.309875 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309926 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309943 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309949 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.309954 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.309959 | orchestrator | 2025-05-14 14:43:29.309965 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.309970 | orchestrator | Wednesday 14 May 2025 14:34:05 +0000 (0:00:00.624) 0:03:31.769 ********* 2025-05-14 14:43:29.309976 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.309981 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.309986 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.309991 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.309996 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310002 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310007 | orchestrator | 2025-05-14 14:43:29.310035 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.310048 | orchestrator | Wednesday 14 May 2025 14:34:06 +0000 (0:00:01.091) 0:03:32.861 ********* 2025-05-14 14:43:29.310058 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310063 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310069 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310074 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310079 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310084 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310095 | orchestrator | 2025-05-14 14:43:29.310101 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.310107 | orchestrator | Wednesday 14 May 2025 14:34:07 +0000 (0:00:00.744) 0:03:33.605 ********* 2025-05-14 14:43:29.310112 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310117 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310123 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310128 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310134 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310139 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310144 | orchestrator | 2025-05-14 14:43:29.310150 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.310155 | orchestrator | Wednesday 14 May 2025 14:34:08 +0000 (0:00:01.099) 0:03:34.704 ********* 
2025-05-14 14:43:29.310160 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310165 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310171 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310176 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.310182 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.310187 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.310192 | orchestrator | 2025-05-14 14:43:29.310198 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.310204 | orchestrator | Wednesday 14 May 2025 14:34:08 +0000 (0:00:00.814) 0:03:35.518 ********* 2025-05-14 14:43:29.310209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.310214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.310220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.310225 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310230 | orchestrator | 2025-05-14 14:43:29.310236 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.310241 | orchestrator | Wednesday 14 May 2025 14:34:09 +0000 (0:00:00.778) 0:03:36.297 ********* 2025-05-14 14:43:29.310246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.310252 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.310260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.310269 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310278 | orchestrator | 2025-05-14 14:43:29.310287 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.310297 | orchestrator | Wednesday 14 May 2025 14:34:10 +0000 (0:00:00.951) 0:03:37.248 ********* 2025-05-14 14:43:29.310307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.310316 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.310324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.310330 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310335 | orchestrator | 2025-05-14 14:43:29.310341 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.310346 | orchestrator | Wednesday 14 May 2025 14:34:11 +0000 (0:00:00.432) 0:03:37.680 ********* 2025-05-14 14:43:29.310351 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310357 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310376 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310382 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.310387 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.310393 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.310398 | orchestrator | 2025-05-14 14:43:29.310404 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.310409 | orchestrator | Wednesday 14 May 2025 14:34:11 +0000 (0:00:00.751) 0:03:38.432 ********* 2025-05-14 14:43:29.310415 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.310420 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310431 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-05-14 14:43:29.310437 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310442 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.310447 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310453 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 14:43:29.310458 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 14:43:29.310464 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 14:43:29.310469 | orchestrator | 2025-05-14 14:43:29.310475 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.310480 | orchestrator | Wednesday 14 May 2025 14:34:13 +0000 (0:00:01.459) 0:03:39.891 ********* 2025-05-14 14:43:29.310486 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310549 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310558 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310568 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310573 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310579 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310584 | orchestrator | 2025-05-14 14:43:29.310590 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.310595 | orchestrator | Wednesday 14 May 2025 14:34:14 +0000 (0:00:00.726) 0:03:40.617 ********* 2025-05-14 14:43:29.310600 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310619 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310626 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310631 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310637 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310642 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310648 | orchestrator | 2025-05-14 14:43:29.310657 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.310667 | orchestrator | Wednesday 14 May 2025 14:34:15 +0000 (0:00:01.086) 0:03:41.704 ********* 2025-05-14 14:43:29.310676 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.310684 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310690 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.310695 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310700 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.310706 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310711 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.310716 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310722 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.310727 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310733 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.310738 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310743 | orchestrator | 2025-05-14 14:43:29.310749 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.310754 | orchestrator | Wednesday 14 May 2025 14:34:16 +0000 (0:00:01.254) 0:03:42.958 ********* 2025-05-14 14:43:29.310759 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310765 | orchestrator | skipping: [testbed-node-1] 
2025-05-14 14:43:29.310770 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310775 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.310781 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310786 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.310792 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310797 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.310802 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310813 | orchestrator | 2025-05-14 14:43:29.310819 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.310824 | orchestrator | Wednesday 14 May 2025 14:34:17 +0000 (0:00:01.233) 0:03:44.192 ********* 2025-05-14 14:43:29.310830 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.310835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.310840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.310845 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.310851 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 14:43:29.310856 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 14:43:29.310861 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 14:43:29.310867 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.310872 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 14:43:29.310877 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 14:43:29.310882 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 14:43:29.310888 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.310893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.310898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.310903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.310908 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.310914 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.310919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.310924 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.310929 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.310934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.310940 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.310945 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.310951 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.310956 | orchestrator | 2025-05-14 14:43:29.310961 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.310966 | orchestrator | Wednesday 14 May 2025 14:34:19 +0000 (0:00:01.402) 0:03:45.594 ********* 
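At this point the rgw facts are settled: testbed-node-3, -4 and -5 each carry a single rgw0 instance bound to their own 192.168.16.13/.14/.15 address on port 8081, exactly the loop items shown above. Reconstructed from those items (illustrative only; the real values are derived by the ceph-facts role), the per-host structure that the ceph.conf template rendered next typically consumes looks like:

  rgw_instances:                                      # as set on testbed-node-3; .14/.15 on the other rgw hosts
    - instance_name: rgw0
      radosgw_address: 192.168.16.13
      radosgw_frontend_port: 8081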
2025-05-14 14:43:29.310972 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.310977 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.310982 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.310988 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.310993 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.310998 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.311003 | orchestrator | 2025-05-14 14:43:29.311055 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 14:43:29.311063 | orchestrator | Wednesday 14 May 2025 14:34:23 +0000 (0:00:04.614) 0:03:50.209 ********* 2025-05-14 14:43:29.311073 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.311078 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.311083 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.311088 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.311094 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.311099 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.311104 | orchestrator | 2025-05-14 14:43:29.311110 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-14 14:43:29.311115 | orchestrator | Wednesday 14 May 2025 14:34:24 +0000 (0:00:01.069) 0:03:51.279 ********* 2025-05-14 14:43:29.311120 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311125 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.311131 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.311142 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.311148 | orchestrator | 2025-05-14 14:43:29.311158 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-14 14:43:29.311168 | orchestrator | Wednesday 14 May 2025 14:34:25 +0000 (0:00:01.158) 0:03:52.438 ********* 2025-05-14 14:43:29.311177 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.311183 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.311188 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.311193 | orchestrator | 2025-05-14 14:43:29.311199 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-14 14:43:29.311204 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.311209 | orchestrator | 2025-05-14 14:43:29.311215 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-14 14:43:29.311220 | orchestrator | Wednesday 14 May 2025 14:34:27 +0000 (0:00:01.323) 0:03:53.762 ********* 2025-05-14 14:43:29.311225 | orchestrator | 2025-05-14 14:43:29.311231 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-14 14:43:29.311236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.311241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.311247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.311252 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311257 | orchestrator | 2025-05-14 14:43:29.311263 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] 
*********************** 2025-05-14 14:43:29.311268 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.311273 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.311278 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.311284 | orchestrator | 2025-05-14 14:43:29.311289 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-14 14:43:29.311296 | orchestrator | Wednesday 14 May 2025 14:34:28 +0000 (0:00:01.428) 0:03:55.191 ********* 2025-05-14 14:43:29.311305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.311313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.311321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.311331 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.311341 | orchestrator | 2025-05-14 14:43:29.311347 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-14 14:43:29.311353 | orchestrator | Wednesday 14 May 2025 14:34:29 +0000 (0:00:00.998) 0:03:56.189 ********* 2025-05-14 14:43:29.311358 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.311364 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.311369 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.311374 | orchestrator | 2025-05-14 14:43:29.311380 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-14 14:43:29.311385 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311390 | orchestrator | 2025-05-14 14:43:29.311396 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-14 14:43:29.311401 | orchestrator | Wednesday 14 May 2025 14:34:30 +0000 (0:00:00.744) 0:03:56.934 ********* 2025-05-14 14:43:29.311407 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.311412 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.311417 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.311422 | orchestrator | 2025-05-14 14:43:29.311428 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-14 14:43:29.311433 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311439 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.311444 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.311449 | orchestrator | 2025-05-14 14:43:29.311455 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 14:43:29.311465 | orchestrator | Wednesday 14 May 2025 14:34:31 +0000 (0:00:00.712) 0:03:57.647 ********* 2025-05-14 14:43:29.311471 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.311476 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.311481 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.311487 | orchestrator | 2025-05-14 14:43:29.311492 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-14 14:43:29.311497 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311503 | orchestrator | 2025-05-14 14:43:29.311508 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 14:43:29.311514 | orchestrator | Wednesday 14 May 2025 14:34:31 +0000 (0:00:00.579) 0:03:58.227 ********* 2025-05-14 14:43:29.311519 | orchestrator | 
skipping: [testbed-node-0] 2025-05-14 14:43:29.311524 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.311529 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.311535 | orchestrator | 2025-05-14 14:43:29.311540 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-14 14:43:29.311546 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311551 | orchestrator | 2025-05-14 14:43:29.311556 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-14 14:43:29.311562 | orchestrator | Wednesday 14 May 2025 14:34:32 +0000 (0:00:01.042) 0:03:59.270 ********* 2025-05-14 14:43:29.311567 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311573 | orchestrator | 2025-05-14 14:43:29.311663 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-14 14:43:29.311679 | orchestrator | Wednesday 14 May 2025 14:34:32 +0000 (0:00:00.143) 0:03:59.413 ********* 2025-05-14 14:43:29.311685 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.311690 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.311696 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.311702 | orchestrator | 2025-05-14 14:43:29.311707 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-14 14:43:29.311713 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311719 | orchestrator | 2025-05-14 14:43:29.311725 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 14:43:29.311730 | orchestrator | Wednesday 14 May 2025 14:34:33 +0000 (0:00:00.953) 0:04:00.366 ********* 2025-05-14 14:43:29.311736 | orchestrator | 2025-05-14 14:43:29.311742 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-14 14:43:29.311748 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311757 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.311767 | orchestrator | 2025-05-14 14:43:29.311777 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-14 14:43:29.311784 | orchestrator | Wednesday 14 May 2025 14:34:34 +0000 (0:00:00.830) 0:04:01.197 ********* 2025-05-14 14:43:29.311790 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.311795 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.311800 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.311806 | orchestrator | 2025-05-14 14:43:29.311811 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-14 14:43:29.311817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.311822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.311827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.311832 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311838 | orchestrator | 2025-05-14 14:43:29.311843 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 14:43:29.311849 | orchestrator | Wednesday 14 May 2025 14:34:35 +0000 (0:00:00.958) 0:04:02.155 ********* 2025-05-14 14:43:29.311854 | orchestrator | 2025-05-14 
14:43:29.311859 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-14 14:43:29.311865 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311875 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.311881 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.311886 | orchestrator | 2025-05-14 14:43:29.311891 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 14:43:29.311897 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.311901 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.311906 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.311911 | orchestrator | 2025-05-14 14:43:29.311916 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-14 14:43:29.311920 | orchestrator | Wednesday 14 May 2025 14:34:37 +0000 (0:00:01.518) 0:04:03.674 ********* 2025-05-14 14:43:29.311925 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.311930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.311934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.311939 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.311944 | orchestrator | 2025-05-14 14:43:29.311949 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-14 14:43:29.311953 | orchestrator | Wednesday 14 May 2025 14:34:38 +0000 (0:00:00.992) 0:04:04.667 ********* 2025-05-14 14:43:29.311958 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.311963 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.311967 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.311972 | orchestrator | 2025-05-14 14:43:29.311977 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-14 14:43:29.311982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.311986 | orchestrator | 2025-05-14 14:43:29.311991 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 14:43:29.311996 | orchestrator | Wednesday 14 May 2025 14:34:39 +0000 (0:00:01.145) 0:04:05.812 ********* 2025-05-14 14:43:29.312001 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.312005 | orchestrator | 2025-05-14 14:43:29.312010 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-14 14:43:29.312015 | orchestrator | Wednesday 14 May 2025 14:34:39 +0000 (0:00:00.490) 0:04:06.302 ********* 2025-05-14 14:43:29.312020 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312024 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312029 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312034 | orchestrator | 2025-05-14 14:43:29.312038 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-14 14:43:29.312043 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.312048 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.312052 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.312057 | orchestrator | 2025-05-14 14:43:29.312062 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 
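The mon, mgr, and mds restart scripts copied by these handlers are small shell wrappers staged into the tempdir created earlier; they restart the containerized daemon and wait for it to come back healthy before the play continues. A minimal sketch of that idea, assuming a systemd-managed ceph-mon unit and a simplified health check (illustrative only, not the literal ceph-ansible template):

    #!/bin/bash
    # restart the containerized mon on this host, then wait for it to rejoin the quorum
    MON_ID=$(hostname -s)
    systemctl restart "ceph-mon@${MON_ID}"
    for _ in $(seq 1 30); do
        # quorum_status lists the mons currently in quorum under .quorum_names
        if ceph quorum_status --format json 2>/dev/null | grep -q "\"${MON_ID}\""; then
            exit 0
        fi
        sleep 2
    done
    echo "mon.${MON_ID} did not rejoin the quorum in time" >&2
    exit 1

In this run the handlers only stage the scripts; the "restart ceph ... daemon(s)" tasks themselves are skipped because nothing needed restarting during the initial deployment.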
2025-05-14 14:43:29.312067 | orchestrator | Wednesday 14 May 2025 14:34:40 +0000 (0:00:00.876) 0:04:07.179 ********* 2025-05-14 14:43:29.312071 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.312076 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.312081 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.312086 | orchestrator | 2025-05-14 14:43:29.312090 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.312095 | orchestrator | Wednesday 14 May 2025 14:34:41 +0000 (0:00:01.107) 0:04:08.287 ********* 2025-05-14 14:43:29.312100 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.312105 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.312109 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.312114 | orchestrator | 2025-05-14 14:43:29.312119 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-14 14:43:29.312167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.312174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.312187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.312192 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.312196 | orchestrator | 2025-05-14 14:43:29.312201 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-14 14:43:29.312206 | orchestrator | Wednesday 14 May 2025 14:34:43 +0000 (0:00:01.386) 0:04:09.674 ********* 2025-05-14 14:43:29.312210 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.312215 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.312220 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.312225 | orchestrator | 2025-05-14 14:43:29.312229 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 14:43:29.312234 | orchestrator | Wednesday 14 May 2025 14:34:43 +0000 (0:00:00.810) 0:04:10.484 ********* 2025-05-14 14:43:29.312239 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.312244 | orchestrator | 2025-05-14 14:43:29.312248 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-14 14:43:29.312253 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.510) 0:04:10.995 ********* 2025-05-14 14:43:29.312258 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.312263 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.312268 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.312277 | orchestrator | 2025-05-14 14:43:29.312285 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-14 14:43:29.312295 | orchestrator | Wednesday 14 May 2025 14:34:44 +0000 (0:00:00.291) 0:04:11.287 ********* 2025-05-14 14:43:29.312300 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.312305 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.312310 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.312315 | orchestrator | 2025-05-14 14:43:29.312321 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-14 14:43:29.312329 | orchestrator | Wednesday 14 May 2025 14:34:46 +0000 (0:00:01.433) 0:04:12.720 ********* 2025-05-14 
14:43:29.312337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.312345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.312353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.312361 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.312369 | orchestrator | 2025-05-14 14:43:29.312376 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-14 14:43:29.312384 | orchestrator | Wednesday 14 May 2025 14:34:46 +0000 (0:00:00.667) 0:04:13.388 ********* 2025-05-14 14:43:29.312392 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.312397 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.312402 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.312406 | orchestrator | 2025-05-14 14:43:29.312411 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-14 14:43:29.312416 | orchestrator | Wednesday 14 May 2025 14:34:47 +0000 (0:00:00.461) 0:04:13.849 ********* 2025-05-14 14:43:29.312420 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.312425 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.312430 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.312434 | orchestrator | 2025-05-14 14:43:29.312439 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 14:43:29.312444 | orchestrator | Wednesday 14 May 2025 14:34:47 +0000 (0:00:00.456) 0:04:14.305 ********* 2025-05-14 14:43:29.312448 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.312453 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.312458 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.312462 | orchestrator | 2025-05-14 14:43:29.312467 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-14 14:43:29.312486 | orchestrator | Wednesday 14 May 2025 14:34:48 +0000 (0:00:00.861) 0:04:15.166 ********* 2025-05-14 14:43:29.312500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.312505 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.312509 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.312514 | orchestrator | 2025-05-14 14:43:29.312519 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.312524 | orchestrator | Wednesday 14 May 2025 14:34:49 +0000 (0:00:00.473) 0:04:15.640 ********* 2025-05-14 14:43:29.312529 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.312533 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.312538 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.312543 | orchestrator | 2025-05-14 14:43:29.312548 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-14 14:43:29.312552 | orchestrator | 2025-05-14 14:43:29.312557 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.312562 | orchestrator | Wednesday 14 May 2025 14:34:51 +0000 (0:00:02.409) 0:04:18.049 ********* 2025-05-14 14:43:29.312567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.312572 | orchestrator | 2025-05-14 14:43:29.312577 | orchestrator | TASK [ceph-handler 
: check for a mon container] ******************************** 2025-05-14 14:43:29.312582 | orchestrator | Wednesday 14 May 2025 14:34:52 +0000 (0:00:00.665) 0:04:18.715 ********* 2025-05-14 14:43:29.312587 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.312591 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.312596 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.312601 | orchestrator | 2025-05-14 14:43:29.312620 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.312626 | orchestrator | Wednesday 14 May 2025 14:34:52 +0000 (0:00:00.723) 0:04:19.438 ********* 2025-05-14 14:43:29.312630 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312635 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312640 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312645 | orchestrator | 2025-05-14 14:43:29.312650 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.312702 | orchestrator | Wednesday 14 May 2025 14:34:53 +0000 (0:00:00.291) 0:04:19.729 ********* 2025-05-14 14:43:29.312709 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312714 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312722 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312727 | orchestrator | 2025-05-14 14:43:29.312732 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 14:43:29.312737 | orchestrator | Wednesday 14 May 2025 14:34:53 +0000 (0:00:00.435) 0:04:20.165 ********* 2025-05-14 14:43:29.312742 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312746 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312751 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312756 | orchestrator | 2025-05-14 14:43:29.312760 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.312765 | orchestrator | Wednesday 14 May 2025 14:34:53 +0000 (0:00:00.280) 0:04:20.446 ********* 2025-05-14 14:43:29.312770 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.312774 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.312779 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.312784 | orchestrator | 2025-05-14 14:43:29.312789 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 14:43:29.312793 | orchestrator | Wednesday 14 May 2025 14:34:54 +0000 (0:00:00.728) 0:04:21.175 ********* 2025-05-14 14:43:29.312799 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312807 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312815 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312823 | orchestrator | 2025-05-14 14:43:29.312830 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.312835 | orchestrator | Wednesday 14 May 2025 14:34:54 +0000 (0:00:00.348) 0:04:21.524 ********* 2025-05-14 14:43:29.312847 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312852 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312857 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312862 | orchestrator | 2025-05-14 14:43:29.312866 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.312871 | 
orchestrator | Wednesday 14 May 2025 14:34:55 +0000 (0:00:00.530) 0:04:22.054 ********* 2025-05-14 14:43:29.312876 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312881 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312885 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312890 | orchestrator | 2025-05-14 14:43:29.312895 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.312900 | orchestrator | Wednesday 14 May 2025 14:34:55 +0000 (0:00:00.370) 0:04:22.424 ********* 2025-05-14 14:43:29.312904 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312909 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312914 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312918 | orchestrator | 2025-05-14 14:43:29.312923 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.312928 | orchestrator | Wednesday 14 May 2025 14:34:56 +0000 (0:00:00.366) 0:04:22.791 ********* 2025-05-14 14:43:29.312933 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312937 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312942 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.312947 | orchestrator | 2025-05-14 14:43:29.312951 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 14:43:29.312956 | orchestrator | Wednesday 14 May 2025 14:34:56 +0000 (0:00:00.343) 0:04:23.134 ********* 2025-05-14 14:43:29.312961 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.312965 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.312970 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.312975 | orchestrator | 2025-05-14 14:43:29.312980 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.312984 | orchestrator | Wednesday 14 May 2025 14:34:57 +0000 (0:00:00.906) 0:04:24.041 ********* 2025-05-14 14:43:29.312989 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.312994 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.312999 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313003 | orchestrator | 2025-05-14 14:43:29.313008 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.313013 | orchestrator | Wednesday 14 May 2025 14:34:57 +0000 (0:00:00.296) 0:04:24.337 ********* 2025-05-14 14:43:29.313017 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.313022 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.313027 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.313031 | orchestrator | 2025-05-14 14:43:29.313036 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.313041 | orchestrator | Wednesday 14 May 2025 14:34:58 +0000 (0:00:00.299) 0:04:24.636 ********* 2025-05-14 14:43:29.313045 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313050 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313055 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313060 | orchestrator | 2025-05-14 14:43:29.313064 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.313069 | orchestrator | Wednesday 14 May 2025 14:34:58 +0000 (0:00:00.283) 0:04:24.920 ********* 
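The "check for a ... container" tasks in this block are idempotency probes: they record whether each daemon's container is already running on the host, and the handler_*_status facts derived from them gate the restart handlers later on. Conceptually each probe reduces to a container-runtime query along these lines (illustrative; the role templates the exact container name and uses its configured container binary):

    CONTAINER_BINARY=podman           # or docker, depending on the deployment
    HOST=$(hostname -s)
    # a non-empty list of container IDs means the daemon is already up on this host
    ${CONTAINER_BINARY} ps -q --filter "name=ceph-mon-${HOST}"
    ${CONTAINER_BINARY} ps -q --filter "name=ceph-mgr-${HOST}"
    ${CONTAINER_BINARY} ps -q --filter "name=ceph-crash-${HOST}"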
2025-05-14 14:43:29.313074 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313078 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313083 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313088 | orchestrator | 2025-05-14 14:43:29.313092 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.313097 | orchestrator | Wednesday 14 May 2025 14:34:58 +0000 (0:00:00.566) 0:04:25.486 ********* 2025-05-14 14:43:29.313102 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313111 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313116 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313120 | orchestrator | 2025-05-14 14:43:29.313125 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.313130 | orchestrator | Wednesday 14 May 2025 14:34:59 +0000 (0:00:00.328) 0:04:25.814 ********* 2025-05-14 14:43:29.313135 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313156 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313161 | orchestrator | 2025-05-14 14:43:29.313166 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.313213 | orchestrator | Wednesday 14 May 2025 14:34:59 +0000 (0:00:00.321) 0:04:26.136 ********* 2025-05-14 14:43:29.313220 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313225 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313233 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313238 | orchestrator | 2025-05-14 14:43:29.313242 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.313247 | orchestrator | Wednesday 14 May 2025 14:34:59 +0000 (0:00:00.355) 0:04:26.491 ********* 2025-05-14 14:43:29.313252 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.313257 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.313261 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.313266 | orchestrator | 2025-05-14 14:43:29.313271 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.313276 | orchestrator | Wednesday 14 May 2025 14:35:00 +0000 (0:00:00.601) 0:04:27.093 ********* 2025-05-14 14:43:29.313280 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.313285 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.313290 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.313296 | orchestrator | 2025-05-14 14:43:29.313305 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.313314 | orchestrator | Wednesday 14 May 2025 14:35:00 +0000 (0:00:00.371) 0:04:27.464 ********* 2025-05-14 14:43:29.313321 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313326 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313331 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313336 | orchestrator | 2025-05-14 14:43:29.313341 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.313347 | orchestrator | Wednesday 14 May 2025 14:35:01 +0000 (0:00:00.385) 0:04:27.850 ********* 2025-05-14 14:43:29.313354 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313362 
| orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313369 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313377 | orchestrator | 2025-05-14 14:43:29.313386 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.313393 | orchestrator | Wednesday 14 May 2025 14:35:01 +0000 (0:00:00.357) 0:04:28.207 ********* 2025-05-14 14:43:29.313398 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313403 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313407 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313412 | orchestrator | 2025-05-14 14:43:29.313417 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.313421 | orchestrator | Wednesday 14 May 2025 14:35:02 +0000 (0:00:00.662) 0:04:28.869 ********* 2025-05-14 14:43:29.313426 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313431 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313435 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313440 | orchestrator | 2025-05-14 14:43:29.313445 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.313450 | orchestrator | Wednesday 14 May 2025 14:35:02 +0000 (0:00:00.373) 0:04:29.243 ********* 2025-05-14 14:43:29.313454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313459 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313470 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313475 | orchestrator | 2025-05-14 14:43:29.313479 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.313484 | orchestrator | Wednesday 14 May 2025 14:35:03 +0000 (0:00:00.358) 0:04:29.602 ********* 2025-05-14 14:43:29.313489 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313493 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313498 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313503 | orchestrator | 2025-05-14 14:43:29.313507 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.313512 | orchestrator | Wednesday 14 May 2025 14:35:03 +0000 (0:00:00.340) 0:04:29.942 ********* 2025-05-14 14:43:29.313516 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313521 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313526 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313530 | orchestrator | 2025-05-14 14:43:29.313535 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.313540 | orchestrator | Wednesday 14 May 2025 14:35:04 +0000 (0:00:00.783) 0:04:30.726 ********* 2025-05-14 14:43:29.313545 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313549 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313554 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313559 | orchestrator | 2025-05-14 14:43:29.313564 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.313569 | orchestrator | Wednesday 14 May 2025 14:35:04 +0000 (0:00:00.344) 0:04:31.070 ********* 2025-05-14 14:43:29.313573 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313578 | 
orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313583 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313587 | orchestrator | 2025-05-14 14:43:29.313592 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.313597 | orchestrator | Wednesday 14 May 2025 14:35:04 +0000 (0:00:00.377) 0:04:31.448 ********* 2025-05-14 14:43:29.313602 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313622 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313628 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313633 | orchestrator | 2025-05-14 14:43:29.313637 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.313643 | orchestrator | Wednesday 14 May 2025 14:35:05 +0000 (0:00:00.379) 0:04:31.828 ********* 2025-05-14 14:43:29.313647 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313652 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313657 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313661 | orchestrator | 2025-05-14 14:43:29.313666 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.313671 | orchestrator | Wednesday 14 May 2025 14:35:05 +0000 (0:00:00.663) 0:04:32.491 ********* 2025-05-14 14:43:29.313676 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313680 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313685 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313690 | orchestrator | 2025-05-14 14:43:29.313739 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.313750 | orchestrator | Wednesday 14 May 2025 14:35:06 +0000 (0:00:00.360) 0:04:32.851 ********* 2025-05-14 14:43:29.313755 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.313760 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.313765 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313769 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.313774 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.313780 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313787 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.313801 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.313808 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313816 | orchestrator | 2025-05-14 14:43:29.313826 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.313833 | orchestrator | Wednesday 14 May 2025 14:35:06 +0000 (0:00:00.438) 0:04:33.289 ********* 2025-05-14 14:43:29.313840 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 14:43:29.313847 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 14:43:29.313854 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313862 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 14:43:29.313870 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 14:43:29.313878 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313886 | orchestrator | skipping: 
[testbed-node-2] => (item=osd memory target)  2025-05-14 14:43:29.313893 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 14:43:29.313899 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313907 | orchestrator | 2025-05-14 14:43:29.313914 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.313921 | orchestrator | Wednesday 14 May 2025 14:35:07 +0000 (0:00:00.812) 0:04:34.102 ********* 2025-05-14 14:43:29.313928 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313934 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313942 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313948 | orchestrator | 2025-05-14 14:43:29.313955 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.313962 | orchestrator | Wednesday 14 May 2025 14:35:07 +0000 (0:00:00.358) 0:04:34.461 ********* 2025-05-14 14:43:29.313969 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.313976 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.313983 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.313990 | orchestrator | 2025-05-14 14:43:29.313997 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.314004 | orchestrator | Wednesday 14 May 2025 14:35:08 +0000 (0:00:00.399) 0:04:34.860 ********* 2025-05-14 14:43:29.314012 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314068 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314075 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314083 | orchestrator | 2025-05-14 14:43:29.314089 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.314096 | orchestrator | Wednesday 14 May 2025 14:35:08 +0000 (0:00:00.422) 0:04:35.283 ********* 2025-05-14 14:43:29.314103 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314111 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314118 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314125 | orchestrator | 2025-05-14 14:43:29.314151 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.314158 | orchestrator | Wednesday 14 May 2025 14:35:09 +0000 (0:00:00.574) 0:04:35.858 ********* 2025-05-14 14:43:29.314165 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314172 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314180 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314187 | orchestrator | 2025-05-14 14:43:29.314194 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.314201 | orchestrator | Wednesday 14 May 2025 14:35:09 +0000 (0:00:00.284) 0:04:36.142 ********* 2025-05-14 14:43:29.314208 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314217 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314224 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314232 | orchestrator | 2025-05-14 14:43:29.314239 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.314258 | orchestrator | Wednesday 14 May 2025 14:35:09 +0000 (0:00:00.306) 0:04:36.448 ********* 
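The skipped ceph-config sizing steps a few entries above ("ceph-volume lvm batch --report", "ceph-volume lvm list", num_osds, osd_memory_target) only apply to OSD hosts, which is why they are skipped on the mon nodes in this play. On an OSD host the underlying report calls look roughly like this (device paths are illustrative):

    # dry-run report of how many OSDs a batch run would create on the given devices
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    # OSDs that already exist on the host, used to top up the num_osds count
    ceph-volume lvm list --format json

The JSON output of these commands is what the role parses to set num_osds, which in turn feeds the osd_memory_target calculation.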
2025-05-14 14:43:29.314263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.314268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.314273 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.314278 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314282 | orchestrator | 2025-05-14 14:43:29.314287 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.314292 | orchestrator | Wednesday 14 May 2025 14:35:10 +0000 (0:00:00.418) 0:04:36.867 ********* 2025-05-14 14:43:29.314297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.314301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.314306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.314311 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314315 | orchestrator | 2025-05-14 14:43:29.314320 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.314325 | orchestrator | Wednesday 14 May 2025 14:35:10 +0000 (0:00:00.379) 0:04:37.247 ********* 2025-05-14 14:43:29.314330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.314334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.314339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.314437 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314448 | orchestrator | 2025-05-14 14:43:29.314459 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.314465 | orchestrator | Wednesday 14 May 2025 14:35:11 +0000 (0:00:00.368) 0:04:37.615 ********* 2025-05-14 14:43:29.314471 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314476 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314481 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314486 | orchestrator | 2025-05-14 14:43:29.314492 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.314497 | orchestrator | Wednesday 14 May 2025 14:35:11 +0000 (0:00:00.512) 0:04:38.127 ********* 2025-05-14 14:43:29.314502 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.314507 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314513 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.314518 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314523 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.314528 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314533 | orchestrator | 2025-05-14 14:43:29.314538 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.314544 | orchestrator | Wednesday 14 May 2025 14:35:12 +0000 (0:00:00.452) 0:04:38.580 ********* 2025-05-14 14:43:29.314549 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314554 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314561 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314569 | orchestrator | 2025-05-14 14:43:29.314576 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
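The ceph-facts tasks around this point work out which address each radosgw instance should bind to, trying radosgw_address_block, then an explicit radosgw_address, then radosgw_interface, before assembling the rgw_instances list; all of them are skipped on the mon nodes in this run. The interface-based variant is conceptually just an address lookup like the following (the role does this with Ansible facts and filters rather than a shell command; the interface name is hypothetical):

    RADOSGW_INTERFACE=eth0   # hypothetical interface name
    # first IPv4 address configured on that interface, without the prefix length
    ip -4 -o addr show dev "${RADOSGW_INTERFACE}" | awk '{print $4}' | cut -d/ -f1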
*************************** 2025-05-14 14:43:29.314584 | orchestrator | Wednesday 14 May 2025 14:35:12 +0000 (0:00:00.332) 0:04:38.912 ********* 2025-05-14 14:43:29.314591 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314599 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314624 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314630 | orchestrator | 2025-05-14 14:43:29.314635 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.314640 | orchestrator | Wednesday 14 May 2025 14:35:12 +0000 (0:00:00.305) 0:04:39.218 ********* 2025-05-14 14:43:29.314645 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.314650 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314661 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.314665 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314670 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.314675 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314680 | orchestrator | 2025-05-14 14:43:29.314684 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.314689 | orchestrator | Wednesday 14 May 2025 14:35:13 +0000 (0:00:00.627) 0:04:39.845 ********* 2025-05-14 14:43:29.314694 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314699 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314704 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314708 | orchestrator | 2025-05-14 14:43:29.314713 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.314718 | orchestrator | Wednesday 14 May 2025 14:35:13 +0000 (0:00:00.257) 0:04:40.103 ********* 2025-05-14 14:43:29.314723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.314727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.314732 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.314737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 14:43:29.314742 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 14:43:29.314746 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 14:43:29.314751 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314756 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 14:43:29.314765 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 14:43:29.314770 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 14:43:29.314775 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314780 | orchestrator | 2025-05-14 14:43:29.314785 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.314789 | orchestrator | Wednesday 14 May 2025 14:35:14 +0000 (0:00:00.476) 0:04:40.579 ********* 2025-05-14 14:43:29.314794 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314799 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314804 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314808 | orchestrator | 2025-05-14 14:43:29.314813 | 
orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.314818 | orchestrator | Wednesday 14 May 2025 14:35:14 +0000 (0:00:00.568) 0:04:41.147 ********* 2025-05-14 14:43:29.314823 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314828 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314832 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314837 | orchestrator | 2025-05-14 14:43:29.314842 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.314846 | orchestrator | Wednesday 14 May 2025 14:35:15 +0000 (0:00:00.514) 0:04:41.662 ********* 2025-05-14 14:43:29.314851 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314856 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314861 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314866 | orchestrator | 2025-05-14 14:43:29.314870 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.314875 | orchestrator | Wednesday 14 May 2025 14:35:15 +0000 (0:00:00.657) 0:04:42.319 ********* 2025-05-14 14:43:29.314880 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314885 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.314889 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.314894 | orchestrator | 2025-05-14 14:43:29.314899 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-14 14:43:29.314926 | orchestrator | Wednesday 14 May 2025 14:35:16 +0000 (0:00:00.531) 0:04:42.850 ********* 2025-05-14 14:43:29.314935 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.314944 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.314949 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.314953 | orchestrator | 2025-05-14 14:43:29.314958 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-14 14:43:29.314963 | orchestrator | Wednesday 14 May 2025 14:35:16 +0000 (0:00:00.467) 0:04:43.318 ********* 2025-05-14 14:43:29.314968 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.314973 | orchestrator | 2025-05-14 14:43:29.314978 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-14 14:43:29.314982 | orchestrator | Wednesday 14 May 2025 14:35:17 +0000 (0:00:00.533) 0:04:43.851 ********* 2025-05-14 14:43:29.314987 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.314992 | orchestrator | 2025-05-14 14:43:29.314997 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-14 14:43:29.315001 | orchestrator | Wednesday 14 May 2025 14:35:17 +0000 (0:00:00.137) 0:04:43.988 ********* 2025-05-14 14:43:29.315006 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-14 14:43:29.315011 | orchestrator | 2025-05-14 14:43:29.315016 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-14 14:43:29.315020 | orchestrator | Wednesday 14 May 2025 14:35:18 +0000 (0:00:00.648) 0:04:44.637 ********* 2025-05-14 14:43:29.315025 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315030 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315035 | orchestrator | ok: [testbed-node-2] 
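The monitor bootstrap this play performs (generate the initial keyring on the deploy host, distribute it, then "ceph monitor mkfs with keyring" a little further down) mirrors the classic manual bootstrap sequence. A hedged sketch of that sequence outside Ansible, with illustrative paths (ceph-ansible templates the paths, caps, and container invocation itself):

    # create the initial mon keyring and grant it mon capabilities
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    # fold the admin key into the same keyring so the cluster starts with an admin identity
    ceph-authtool /tmp/ceph.mon.keyring \
        --import-keyring /etc/ceph/ceph.client.admin.keyring
    # build the monitor store for this host from that keyring
    # (a monmap built with monmaptool can additionally be passed via --monmap)
    MON_ID=$(hostname -s)
    ceph-mon --mkfs -i "${MON_ID}" --keyring /tmp/ceph.mon.keyring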
2025-05-14 14:43:29.315040 | orchestrator | 2025-05-14 14:43:29.315044 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-14 14:43:29.315049 | orchestrator | Wednesday 14 May 2025 14:35:18 +0000 (0:00:00.519) 0:04:45.156 ********* 2025-05-14 14:43:29.315054 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315059 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315063 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315068 | orchestrator | 2025-05-14 14:43:29.315073 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-14 14:43:29.315078 | orchestrator | Wednesday 14 May 2025 14:35:18 +0000 (0:00:00.324) 0:04:45.481 ********* 2025-05-14 14:43:29.315083 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315087 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315092 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315097 | orchestrator | 2025-05-14 14:43:29.315102 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-14 14:43:29.315106 | orchestrator | Wednesday 14 May 2025 14:35:20 +0000 (0:00:01.176) 0:04:46.658 ********* 2025-05-14 14:43:29.315111 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315116 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315120 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315125 | orchestrator | 2025-05-14 14:43:29.315130 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-14 14:43:29.315135 | orchestrator | Wednesday 14 May 2025 14:35:20 +0000 (0:00:00.814) 0:04:47.472 ********* 2025-05-14 14:43:29.315140 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315144 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315149 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315154 | orchestrator | 2025-05-14 14:43:29.315159 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-14 14:43:29.315163 | orchestrator | Wednesday 14 May 2025 14:35:21 +0000 (0:00:00.865) 0:04:48.338 ********* 2025-05-14 14:43:29.315168 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315173 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315178 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315182 | orchestrator | 2025-05-14 14:43:29.315187 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-14 14:43:29.315192 | orchestrator | Wednesday 14 May 2025 14:35:22 +0000 (0:00:00.653) 0:04:48.992 ********* 2025-05-14 14:43:29.315200 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315205 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.315210 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.315215 | orchestrator | 2025-05-14 14:43:29.315219 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-14 14:43:29.315224 | orchestrator | Wednesday 14 May 2025 14:35:22 +0000 (0:00:00.287) 0:04:49.279 ********* 2025-05-14 14:43:29.315229 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315234 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315238 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315243 | orchestrator | 2025-05-14 14:43:29.315248 | orchestrator | TASK [ceph-mon : import admin 
keyring into mon keyring] ************************ 2025-05-14 14:43:29.315253 | orchestrator | Wednesday 14 May 2025 14:35:23 +0000 (0:00:00.437) 0:04:49.716 ********* 2025-05-14 14:43:29.315257 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315262 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.315267 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.315272 | orchestrator | 2025-05-14 14:43:29.315277 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-14 14:43:29.315281 | orchestrator | Wednesday 14 May 2025 14:35:23 +0000 (0:00:00.241) 0:04:49.958 ********* 2025-05-14 14:43:29.315286 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315291 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315296 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315300 | orchestrator | 2025-05-14 14:43:29.315305 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-14 14:43:29.315310 | orchestrator | Wednesday 14 May 2025 14:35:23 +0000 (0:00:00.276) 0:04:50.234 ********* 2025-05-14 14:43:29.315315 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315320 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315324 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315329 | orchestrator | 2025-05-14 14:43:29.315334 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-14 14:43:29.315339 | orchestrator | Wednesday 14 May 2025 14:35:24 +0000 (0:00:01.201) 0:04:51.436 ********* 2025-05-14 14:43:29.315343 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315348 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.315353 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.315358 | orchestrator | 2025-05-14 14:43:29.315378 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-14 14:43:29.315386 | orchestrator | Wednesday 14 May 2025 14:35:25 +0000 (0:00:00.418) 0:04:51.855 ********* 2025-05-14 14:43:29.315392 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.315396 | orchestrator | 2025-05-14 14:43:29.315401 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-14 14:43:29.315409 | orchestrator | Wednesday 14 May 2025 14:35:25 +0000 (0:00:00.488) 0:04:52.343 ********* 2025-05-14 14:43:29.315418 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315427 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.315435 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.315444 | orchestrator | 2025-05-14 14:43:29.315454 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-14 14:43:29.315460 | orchestrator | Wednesday 14 May 2025 14:35:26 +0000 (0:00:00.278) 0:04:52.622 ********* 2025-05-14 14:43:29.315465 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315470 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.315474 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.315479 | orchestrator | 2025-05-14 14:43:29.315484 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-14 14:43:29.315489 | orchestrator | Wednesday 14 May 2025 14:35:26 +0000 
(0:00:00.436) 0:04:53.059 ********* 2025-05-14 14:43:29.315494 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.315503 | orchestrator | 2025-05-14 14:43:29.315507 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-14 14:43:29.315512 | orchestrator | Wednesday 14 May 2025 14:35:27 +0000 (0:00:00.524) 0:04:53.583 ********* 2025-05-14 14:43:29.315517 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315521 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315526 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315531 | orchestrator | 2025-05-14 14:43:29.315535 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-14 14:43:29.315540 | orchestrator | Wednesday 14 May 2025 14:35:28 +0000 (0:00:01.097) 0:04:54.680 ********* 2025-05-14 14:43:29.315545 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315550 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315554 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315559 | orchestrator | 2025-05-14 14:43:29.315564 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-14 14:43:29.315568 | orchestrator | Wednesday 14 May 2025 14:35:29 +0000 (0:00:01.411) 0:04:56.092 ********* 2025-05-14 14:43:29.315573 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315578 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315582 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315587 | orchestrator | 2025-05-14 14:43:29.315592 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-14 14:43:29.315597 | orchestrator | Wednesday 14 May 2025 14:35:31 +0000 (0:00:01.691) 0:04:57.783 ********* 2025-05-14 14:43:29.315601 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315643 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315650 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315655 | orchestrator | 2025-05-14 14:43:29.315660 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-14 14:43:29.315665 | orchestrator | Wednesday 14 May 2025 14:35:33 +0000 (0:00:02.220) 0:05:00.004 ********* 2025-05-14 14:43:29.315669 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.315674 | orchestrator | 2025-05-14 14:43:29.315679 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-14 14:43:29.315684 | orchestrator | Wednesday 14 May 2025 14:35:34 +0000 (0:00:00.895) 0:05:00.900 ********* 2025-05-14 14:43:29.315688 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
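The sequence above is the standard containerized start-up path: a templated per-host ceph-mon service unit runs the mon container, ceph-mon.target groups the instances, and once the service is started the play polls the cluster until every monitor appears in the quorum, which is what the retry message reflects (the first poll ran before all three mons had joined). A rough shell equivalent of the enable/start and the quorum poll, assuming jq is available (the real unit file and poll are templated by the role):

    systemctl daemon-reload
    systemctl enable ceph-mon.target
    systemctl enable --now "ceph-mon@$(hostname -s).service"
    # poll until all three expected mons are listed in the quorum
    until ceph quorum_status --format json | jq -e \
        '.quorum_names | contains(["testbed-node-0","testbed-node-1","testbed-node-2"])' >/dev/null; do
        sleep 5
    done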
2025-05-14 14:43:29.315693 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315698 | orchestrator | 2025-05-14 14:43:29.315703 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-14 14:43:29.315707 | orchestrator | Wednesday 14 May 2025 14:35:56 +0000 (0:00:21.668) 0:05:22.568 ********* 2025-05-14 14:43:29.315712 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315717 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315722 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315726 | orchestrator | 2025-05-14 14:43:29.315731 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-14 14:43:29.315736 | orchestrator | Wednesday 14 May 2025 14:36:03 +0000 (0:00:07.197) 0:05:29.766 ********* 2025-05-14 14:43:29.315740 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315745 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.315750 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.315755 | orchestrator | 2025-05-14 14:43:29.315760 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 14:43:29.315764 | orchestrator | Wednesday 14 May 2025 14:36:04 +0000 (0:00:01.198) 0:05:30.964 ********* 2025-05-14 14:43:29.315769 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315774 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315778 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315783 | orchestrator | 2025-05-14 14:43:29.315787 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-14 14:43:29.315795 | orchestrator | Wednesday 14 May 2025 14:36:05 +0000 (0:00:00.728) 0:05:31.692 ********* 2025-05-14 14:43:29.315800 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.315804 | orchestrator | 2025-05-14 14:43:29.315809 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-14 14:43:29.315813 | orchestrator | Wednesday 14 May 2025 14:36:06 +0000 (0:00:00.884) 0:05:32.577 ********* 2025-05-14 14:43:29.315817 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315822 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315845 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315851 | orchestrator | 2025-05-14 14:43:29.315855 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-14 14:43:29.315866 | orchestrator | Wednesday 14 May 2025 14:36:06 +0000 (0:00:00.355) 0:05:32.933 ********* 2025-05-14 14:43:29.315871 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315875 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315879 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315884 | orchestrator | 2025-05-14 14:43:29.315888 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-14 14:43:29.315893 | orchestrator | Wednesday 14 May 2025 14:36:07 +0000 (0:00:01.187) 0:05:34.121 ********* 2025-05-14 14:43:29.315897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.315901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.315906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 
14:43:29.315910 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.315915 | orchestrator | 2025-05-14 14:43:29.315919 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-14 14:43:29.315924 | orchestrator | Wednesday 14 May 2025 14:36:08 +0000 (0:00:01.215) 0:05:35.336 ********* 2025-05-14 14:43:29.315928 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.315933 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.315937 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.315941 | orchestrator | 2025-05-14 14:43:29.315946 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.315950 | orchestrator | Wednesday 14 May 2025 14:36:09 +0000 (0:00:00.340) 0:05:35.677 ********* 2025-05-14 14:43:29.315955 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.315959 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.315964 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.315968 | orchestrator | 2025-05-14 14:43:29.315973 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-14 14:43:29.315977 | orchestrator | 2025-05-14 14:43:29.315981 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.315986 | orchestrator | Wednesday 14 May 2025 14:36:11 +0000 (0:00:02.211) 0:05:37.889 ********* 2025-05-14 14:43:29.315990 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.315995 | orchestrator | 2025-05-14 14:43:29.315999 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 14:43:29.316004 | orchestrator | Wednesday 14 May 2025 14:36:12 +0000 (0:00:00.844) 0:05:38.733 ********* 2025-05-14 14:43:29.316008 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.316013 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.316017 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.316022 | orchestrator | 2025-05-14 14:43:29.316026 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.316031 | orchestrator | Wednesday 14 May 2025 14:36:12 +0000 (0:00:00.734) 0:05:39.467 ********* 2025-05-14 14:43:29.316035 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316040 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316044 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316052 | orchestrator | 2025-05-14 14:43:29.316057 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.316061 | orchestrator | Wednesday 14 May 2025 14:36:13 +0000 (0:00:00.327) 0:05:39.794 ********* 2025-05-14 14:43:29.316066 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316070 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316075 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316079 | orchestrator | 2025-05-14 14:43:29.316084 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 14:43:29.316088 | orchestrator | Wednesday 14 May 2025 14:36:13 +0000 (0:00:00.579) 0:05:40.374 ********* 2025-05-14 14:43:29.316093 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316097 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 14:43:29.316101 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316106 | orchestrator | 2025-05-14 14:43:29.316110 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.316115 | orchestrator | Wednesday 14 May 2025 14:36:14 +0000 (0:00:00.384) 0:05:40.758 ********* 2025-05-14 14:43:29.316119 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.316124 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.316128 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.316133 | orchestrator | 2025-05-14 14:43:29.316137 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 14:43:29.316142 | orchestrator | Wednesday 14 May 2025 14:36:14 +0000 (0:00:00.726) 0:05:41.485 ********* 2025-05-14 14:43:29.316146 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316155 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316159 | orchestrator | 2025-05-14 14:43:29.316164 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.316168 | orchestrator | Wednesday 14 May 2025 14:36:15 +0000 (0:00:00.322) 0:05:41.807 ********* 2025-05-14 14:43:29.316173 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316177 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316181 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316186 | orchestrator | 2025-05-14 14:43:29.316191 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.316195 | orchestrator | Wednesday 14 May 2025 14:36:15 +0000 (0:00:00.635) 0:05:42.443 ********* 2025-05-14 14:43:29.316200 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316204 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316208 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316213 | orchestrator | 2025-05-14 14:43:29.316217 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.316222 | orchestrator | Wednesday 14 May 2025 14:36:16 +0000 (0:00:00.373) 0:05:42.817 ********* 2025-05-14 14:43:29.316227 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316231 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316235 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316240 | orchestrator | 2025-05-14 14:43:29.316259 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.316267 | orchestrator | Wednesday 14 May 2025 14:36:16 +0000 (0:00:00.360) 0:05:43.178 ********* 2025-05-14 14:43:29.316272 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316277 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316281 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316285 | orchestrator | 2025-05-14 14:43:29.316290 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 14:43:29.316294 | orchestrator | Wednesday 14 May 2025 14:36:17 +0000 (0:00:00.354) 0:05:43.533 ********* 2025-05-14 14:43:29.316299 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.316303 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.316308 | orchestrator | ok: [testbed-node-2] 
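The "check for a ... container" tasks above only record whether the corresponding Ceph daemon container is already running on each host, so that later handler and fact tasks can be skipped wherever a service is not deployed. A rough manual equivalent, assuming Docker and ceph-ansible's usual <daemon>-<hostname> container naming (both assumptions, the exact runtime and names are not visible here):

    # Non-empty output means the container is up on this host.
    docker ps -q --filter "name=ceph-mon-$(hostname)"
    docker ps -q --filter "name=ceph-mgr-$(hostname)"
    docker ps -q --filter "name=ceph-crash-$(hostname)"
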
2025-05-14 14:43:29.316312 | orchestrator | 2025-05-14 14:43:29.316320 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.316325 | orchestrator | Wednesday 14 May 2025 14:36:18 +0000 (0:00:01.104) 0:05:44.637 ********* 2025-05-14 14:43:29.316329 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316334 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316338 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316343 | orchestrator | 2025-05-14 14:43:29.316347 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.316352 | orchestrator | Wednesday 14 May 2025 14:36:18 +0000 (0:00:00.414) 0:05:45.051 ********* 2025-05-14 14:43:29.316356 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.316361 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.316365 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.316370 | orchestrator | 2025-05-14 14:43:29.316374 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.316379 | orchestrator | Wednesday 14 May 2025 14:36:18 +0000 (0:00:00.347) 0:05:45.399 ********* 2025-05-14 14:43:29.316383 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316388 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316392 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316397 | orchestrator | 2025-05-14 14:43:29.316401 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.316406 | orchestrator | Wednesday 14 May 2025 14:36:19 +0000 (0:00:00.330) 0:05:45.729 ********* 2025-05-14 14:43:29.316410 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316415 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316419 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316423 | orchestrator | 2025-05-14 14:43:29.316428 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.316433 | orchestrator | Wednesday 14 May 2025 14:36:19 +0000 (0:00:00.602) 0:05:46.331 ********* 2025-05-14 14:43:29.316441 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316448 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316456 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316464 | orchestrator | 2025-05-14 14:43:29.316472 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.316478 | orchestrator | Wednesday 14 May 2025 14:36:20 +0000 (0:00:00.366) 0:05:46.698 ********* 2025-05-14 14:43:29.316487 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316492 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316496 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316501 | orchestrator | 2025-05-14 14:43:29.316505 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.316509 | orchestrator | Wednesday 14 May 2025 14:36:20 +0000 (0:00:00.367) 0:05:47.066 ********* 2025-05-14 14:43:29.316514 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316518 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316523 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316527 | orchestrator | 2025-05-14 14:43:29.316531 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.316536 | orchestrator | Wednesday 14 May 2025 14:36:20 +0000 (0:00:00.314) 0:05:47.380 ********* 2025-05-14 14:43:29.316540 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.316545 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.316549 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.316553 | orchestrator | 2025-05-14 14:43:29.316558 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.316562 | orchestrator | Wednesday 14 May 2025 14:36:21 +0000 (0:00:00.871) 0:05:48.252 ********* 2025-05-14 14:43:29.316567 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.316571 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.316575 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.316580 | orchestrator | 2025-05-14 14:43:29.316584 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.316593 | orchestrator | Wednesday 14 May 2025 14:36:22 +0000 (0:00:00.436) 0:05:48.689 ********* 2025-05-14 14:43:29.316597 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316601 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316621 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316629 | orchestrator | 2025-05-14 14:43:29.316634 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.316638 | orchestrator | Wednesday 14 May 2025 14:36:22 +0000 (0:00:00.435) 0:05:49.124 ********* 2025-05-14 14:43:29.316643 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316647 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316652 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316656 | orchestrator | 2025-05-14 14:43:29.316660 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.316665 | orchestrator | Wednesday 14 May 2025 14:36:22 +0000 (0:00:00.378) 0:05:49.503 ********* 2025-05-14 14:43:29.316669 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316674 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316678 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316683 | orchestrator | 2025-05-14 14:43:29.316687 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.316692 | orchestrator | Wednesday 14 May 2025 14:36:23 +0000 (0:00:00.734) 0:05:50.238 ********* 2025-05-14 14:43:29.316696 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316701 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316705 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316709 | orchestrator | 2025-05-14 14:43:29.316730 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.316739 | orchestrator | Wednesday 14 May 2025 14:36:24 +0000 (0:00:00.379) 0:05:50.617 ********* 2025-05-14 14:43:29.316744 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316748 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316753 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316757 | orchestrator | 2025-05-14 14:43:29.316761 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
2025-05-14 14:43:29.316766 | orchestrator | Wednesday 14 May 2025 14:36:24 +0000 (0:00:00.347) 0:05:50.965 ********* 2025-05-14 14:43:29.316770 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316775 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316779 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316784 | orchestrator | 2025-05-14 14:43:29.316788 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.316792 | orchestrator | Wednesday 14 May 2025 14:36:24 +0000 (0:00:00.289) 0:05:51.254 ********* 2025-05-14 14:43:29.316797 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316801 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316806 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316810 | orchestrator | 2025-05-14 14:43:29.316815 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.316819 | orchestrator | Wednesday 14 May 2025 14:36:25 +0000 (0:00:00.526) 0:05:51.780 ********* 2025-05-14 14:43:29.316824 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316828 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316833 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316837 | orchestrator | 2025-05-14 14:43:29.316842 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.316846 | orchestrator | Wednesday 14 May 2025 14:36:25 +0000 (0:00:00.318) 0:05:52.099 ********* 2025-05-14 14:43:29.316851 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316855 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316860 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316864 | orchestrator | 2025-05-14 14:43:29.316869 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.316880 | orchestrator | Wednesday 14 May 2025 14:36:25 +0000 (0:00:00.330) 0:05:52.429 ********* 2025-05-14 14:43:29.316884 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316889 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316893 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316898 | orchestrator | 2025-05-14 14:43:29.316902 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.316907 | orchestrator | Wednesday 14 May 2025 14:36:26 +0000 (0:00:00.348) 0:05:52.778 ********* 2025-05-14 14:43:29.316911 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316916 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316920 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316924 | orchestrator | 2025-05-14 14:43:29.316929 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.316933 | orchestrator | Wednesday 14 May 2025 14:36:26 +0000 (0:00:00.466) 0:05:53.245 ********* 2025-05-14 14:43:29.316938 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316942 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316947 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.316951 | orchestrator | 2025-05-14 14:43:29.316955 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.316960 | orchestrator | Wednesday 14 May 2025 14:36:27 +0000 (0:00:00.334) 0:05:53.579 ********* 2025-05-14 14:43:29.316965 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.316969 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.316973 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.316978 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.316982 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.316987 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.316991 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.316996 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.317000 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317005 | orchestrator | 2025-05-14 14:43:29.317009 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.317013 | orchestrator | Wednesday 14 May 2025 14:36:27 +0000 (0:00:00.329) 0:05:53.909 ********* 2025-05-14 14:43:29.317018 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 14:43:29.317022 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 14:43:29.317027 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317031 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 14:43:29.317036 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 14:43:29.317040 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317045 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 14:43:29.317049 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 14:43:29.317054 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317058 | orchestrator | 2025-05-14 14:43:29.317063 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.317067 | orchestrator | Wednesday 14 May 2025 14:36:27 +0000 (0:00:00.331) 0:05:54.240 ********* 2025-05-14 14:43:29.317072 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317076 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317080 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317085 | orchestrator | 2025-05-14 14:43:29.317089 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.317094 | orchestrator | Wednesday 14 May 2025 14:36:28 +0000 (0:00:00.484) 0:05:54.725 ********* 2025-05-14 14:43:29.317098 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317103 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317107 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317115 | orchestrator | 2025-05-14 14:43:29.317137 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.317143 | orchestrator | Wednesday 14 May 2025 14:36:28 +0000 (0:00:00.295) 0:05:55.020 ********* 2025-05-14 14:43:29.317147 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317152 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317156 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
14:43:29.317161 | orchestrator | 2025-05-14 14:43:29.317165 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.317169 | orchestrator | Wednesday 14 May 2025 14:36:28 +0000 (0:00:00.300) 0:05:55.321 ********* 2025-05-14 14:43:29.317174 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317178 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317183 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317187 | orchestrator | 2025-05-14 14:43:29.317192 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.317196 | orchestrator | Wednesday 14 May 2025 14:36:29 +0000 (0:00:00.290) 0:05:55.612 ********* 2025-05-14 14:43:29.317200 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317205 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317209 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317214 | orchestrator | 2025-05-14 14:43:29.317218 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.317223 | orchestrator | Wednesday 14 May 2025 14:36:29 +0000 (0:00:00.461) 0:05:56.073 ********* 2025-05-14 14:43:29.317227 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317232 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317236 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317240 | orchestrator | 2025-05-14 14:43:29.317245 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.317249 | orchestrator | Wednesday 14 May 2025 14:36:29 +0000 (0:00:00.305) 0:05:56.379 ********* 2025-05-14 14:43:29.317254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.317258 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.317262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.317267 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317271 | orchestrator | 2025-05-14 14:43:29.317276 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.317280 | orchestrator | Wednesday 14 May 2025 14:36:30 +0000 (0:00:00.381) 0:05:56.761 ********* 2025-05-14 14:43:29.317284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.317289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.317293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.317298 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317302 | orchestrator | 2025-05-14 14:43:29.317307 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.317311 | orchestrator | Wednesday 14 May 2025 14:36:30 +0000 (0:00:00.377) 0:05:57.138 ********* 2025-05-14 14:43:29.317315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.317320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.317324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.317329 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317333 | orchestrator | 2025-05-14 14:43:29.317338 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.317342 | orchestrator | Wednesday 14 May 2025 14:36:31 +0000 (0:00:00.387) 0:05:57.526 ********* 2025-05-14 14:43:29.317346 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317351 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317355 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317364 | orchestrator | 2025-05-14 14:43:29.317369 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.317373 | orchestrator | Wednesday 14 May 2025 14:36:31 +0000 (0:00:00.431) 0:05:57.957 ********* 2025-05-14 14:43:29.317378 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.317382 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317386 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.317391 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317395 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.317400 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317404 | orchestrator | 2025-05-14 14:43:29.317409 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.317413 | orchestrator | Wednesday 14 May 2025 14:36:31 +0000 (0:00:00.404) 0:05:58.362 ********* 2025-05-14 14:43:29.317417 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317422 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317426 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317431 | orchestrator | 2025-05-14 14:43:29.317435 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.317440 | orchestrator | Wednesday 14 May 2025 14:36:32 +0000 (0:00:00.296) 0:05:58.659 ********* 2025-05-14 14:43:29.317444 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317449 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317456 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317464 | orchestrator | 2025-05-14 14:43:29.317471 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.317477 | orchestrator | Wednesday 14 May 2025 14:36:32 +0000 (0:00:00.307) 0:05:58.967 ********* 2025-05-14 14:43:29.317485 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.317493 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317501 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.317508 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317517 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.317522 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317526 | orchestrator | 2025-05-14 14:43:29.317531 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.317554 | orchestrator | Wednesday 14 May 2025 14:36:33 +0000 (0:00:00.737) 0:05:59.705 ********* 2025-05-14 14:43:29.317562 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317567 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317571 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317576 | orchestrator | 2025-05-14 14:43:29.317580 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
2025-05-14 14:43:29.317585 | orchestrator | Wednesday 14 May 2025 14:36:33 +0000 (0:00:00.291) 0:05:59.996 ********* 2025-05-14 14:43:29.317589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.317594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.317598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.317603 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317622 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 14:43:29.317627 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 14:43:29.317632 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 14:43:29.317636 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317641 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 14:43:29.317645 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 14:43:29.317650 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 14:43:29.317654 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317659 | orchestrator | 2025-05-14 14:43:29.317668 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.317673 | orchestrator | Wednesday 14 May 2025 14:36:34 +0000 (0:00:00.570) 0:06:00.567 ********* 2025-05-14 14:43:29.317677 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317682 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317686 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317691 | orchestrator | 2025-05-14 14:43:29.317695 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.317700 | orchestrator | Wednesday 14 May 2025 14:36:34 +0000 (0:00:00.659) 0:06:01.227 ********* 2025-05-14 14:43:29.317704 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317709 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317713 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317718 | orchestrator | 2025-05-14 14:43:29.317722 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.317727 | orchestrator | Wednesday 14 May 2025 14:36:35 +0000 (0:00:00.508) 0:06:01.736 ********* 2025-05-14 14:43:29.317731 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317736 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317740 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317744 | orchestrator | 2025-05-14 14:43:29.317749 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.317754 | orchestrator | Wednesday 14 May 2025 14:36:35 +0000 (0:00:00.734) 0:06:02.471 ********* 2025-05-14 14:43:29.317758 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317762 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317767 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317771 | orchestrator | 2025-05-14 14:43:29.317776 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-14 14:43:29.317780 | orchestrator | Wednesday 14 May 2025 14:36:36 +0000 (0:00:00.634) 0:06:03.105 ********* 2025-05-14 14:43:29.317785 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-05-14 14:43:29.317789 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:43:29.317794 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:43:29.317798 | orchestrator | 2025-05-14 14:43:29.317803 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-14 14:43:29.317807 | orchestrator | Wednesday 14 May 2025 14:36:37 +0000 (0:00:00.973) 0:06:04.079 ********* 2025-05-14 14:43:29.317812 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.317816 | orchestrator | 2025-05-14 14:43:29.317821 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-14 14:43:29.317825 | orchestrator | Wednesday 14 May 2025 14:36:38 +0000 (0:00:00.854) 0:06:04.934 ********* 2025-05-14 14:43:29.317830 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.317834 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.317839 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.317843 | orchestrator | 2025-05-14 14:43:29.317847 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-14 14:43:29.317852 | orchestrator | Wednesday 14 May 2025 14:36:39 +0000 (0:00:00.889) 0:06:05.823 ********* 2025-05-14 14:43:29.317856 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.317861 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.317865 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.317870 | orchestrator | 2025-05-14 14:43:29.317874 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-14 14:43:29.317879 | orchestrator | Wednesday 14 May 2025 14:36:39 +0000 (0:00:00.377) 0:06:06.201 ********* 2025-05-14 14:43:29.317883 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:43:29.317888 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:43:29.317892 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:43:29.317900 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-14 14:43:29.317904 | orchestrator | 2025-05-14 14:43:29.317909 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-14 14:43:29.317913 | orchestrator | Wednesday 14 May 2025 14:36:48 +0000 (0:00:08.384) 0:06:14.585 ********* 2025-05-14 14:43:29.317918 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.317922 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.317927 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.317931 | orchestrator | 2025-05-14 14:43:29.317951 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-14 14:43:29.317957 | orchestrator | Wednesday 14 May 2025 14:36:48 +0000 (0:00:00.447) 0:06:15.032 ********* 2025-05-14 14:43:29.317965 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 14:43:29.317969 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 14:43:29.317974 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 14:43:29.317978 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-14 14:43:29.317983 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-05-14 14:43:29.317987 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:43:29.317992 | orchestrator | 2025-05-14 14:43:29.317996 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-14 14:43:29.318000 | orchestrator | Wednesday 14 May 2025 14:36:50 +0000 (0:00:01.871) 0:06:16.904 ********* 2025-05-14 14:43:29.318005 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 14:43:29.318009 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 14:43:29.318033 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 14:43:29.318038 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:43:29.318042 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-14 14:43:29.318047 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-14 14:43:29.318051 | orchestrator | 2025-05-14 14:43:29.318056 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-14 14:43:29.318060 | orchestrator | Wednesday 14 May 2025 14:36:51 +0000 (0:00:01.232) 0:06:18.137 ********* 2025-05-14 14:43:29.318065 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.318069 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.318074 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.318078 | orchestrator | 2025-05-14 14:43:29.318083 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-14 14:43:29.318088 | orchestrator | Wednesday 14 May 2025 14:36:52 +0000 (0:00:00.652) 0:06:18.790 ********* 2025-05-14 14:43:29.318092 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.318097 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.318101 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.318105 | orchestrator | 2025-05-14 14:43:29.318110 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-14 14:43:29.318115 | orchestrator | Wednesday 14 May 2025 14:36:52 +0000 (0:00:00.549) 0:06:19.339 ********* 2025-05-14 14:43:29.318119 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.318124 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.318128 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.318133 | orchestrator | 2025-05-14 14:43:29.318137 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-14 14:43:29.318142 | orchestrator | Wednesday 14 May 2025 14:36:53 +0000 (0:00:00.306) 0:06:19.645 ********* 2025-05-14 14:43:29.318146 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.318151 | orchestrator | 2025-05-14 14:43:29.318155 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-14 14:43:29.318160 | orchestrator | Wednesday 14 May 2025 14:36:53 +0000 (0:00:00.479) 0:06:20.124 ********* 2025-05-14 14:43:29.318168 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.318173 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.318177 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.318182 | orchestrator | 2025-05-14 14:43:29.318186 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-14 14:43:29.318191 | orchestrator | 
Wednesday 14 May 2025 14:36:54 +0000 (0:00:00.461) 0:06:20.586 ********* 2025-05-14 14:43:29.318195 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.318200 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.318204 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.318209 | orchestrator | 2025-05-14 14:43:29.318213 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-14 14:43:29.318218 | orchestrator | Wednesday 14 May 2025 14:36:54 +0000 (0:00:00.313) 0:06:20.899 ********* 2025-05-14 14:43:29.318222 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.318227 | orchestrator | 2025-05-14 14:43:29.318231 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-14 14:43:29.318236 | orchestrator | Wednesday 14 May 2025 14:36:54 +0000 (0:00:00.469) 0:06:21.368 ********* 2025-05-14 14:43:29.318240 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318245 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318249 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318254 | orchestrator | 2025-05-14 14:43:29.318258 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-14 14:43:29.318263 | orchestrator | Wednesday 14 May 2025 14:36:56 +0000 (0:00:01.291) 0:06:22.659 ********* 2025-05-14 14:43:29.318267 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318272 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318276 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318281 | orchestrator | 2025-05-14 14:43:29.318285 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-14 14:43:29.318290 | orchestrator | Wednesday 14 May 2025 14:36:57 +0000 (0:00:01.106) 0:06:23.766 ********* 2025-05-14 14:43:29.318294 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318299 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318303 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318308 | orchestrator | 2025-05-14 14:43:29.318312 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-14 14:43:29.318317 | orchestrator | Wednesday 14 May 2025 14:36:58 +0000 (0:00:01.703) 0:06:25.470 ********* 2025-05-14 14:43:29.318321 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318326 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318330 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318335 | orchestrator | 2025-05-14 14:43:29.318339 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-14 14:43:29.318360 | orchestrator | Wednesday 14 May 2025 14:37:01 +0000 (0:00:02.583) 0:06:28.053 ********* 2025-05-14 14:43:29.318365 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.318375 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.318380 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-14 14:43:29.318384 | orchestrator | 2025-05-14 14:43:29.318388 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-14 14:43:29.318393 | orchestrator | Wednesday 14 May 2025 14:37:02 +0000 (0:00:00.593) 0:06:28.646 ********* 2025-05-14 
14:43:29.318397 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-14 14:43:29.318402 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-14 14:43:29.318407 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.318411 | orchestrator | 2025-05-14 14:43:29.318415 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-14 14:43:29.318424 | orchestrator | Wednesday 14 May 2025 14:37:15 +0000 (0:00:13.514) 0:06:42.161 ********* 2025-05-14 14:43:29.318429 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.318433 | orchestrator | 2025-05-14 14:43:29.318437 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-14 14:43:29.318442 | orchestrator | Wednesday 14 May 2025 14:37:17 +0000 (0:00:01.878) 0:06:44.039 ********* 2025-05-14 14:43:29.318446 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.318451 | orchestrator | 2025-05-14 14:43:29.318455 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-14 14:43:29.318460 | orchestrator | Wednesday 14 May 2025 14:37:17 +0000 (0:00:00.483) 0:06:44.523 ********* 2025-05-14 14:43:29.318464 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.318469 | orchestrator | 2025-05-14 14:43:29.318475 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-14 14:43:29.318483 | orchestrator | Wednesday 14 May 2025 14:37:18 +0000 (0:00:00.317) 0:06:44.841 ********* 2025-05-14 14:43:29.318490 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-14 14:43:29.318498 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-14 14:43:29.318506 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-14 14:43:29.318514 | orchestrator | 2025-05-14 14:43:29.318521 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-14 14:43:29.318529 | orchestrator | Wednesday 14 May 2025 14:37:24 +0000 (0:00:06.381) 0:06:51.222 ********* 2025-05-14 14:43:29.318534 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-14 14:43:29.318538 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-14 14:43:29.318543 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-14 14:43:29.318548 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-14 14:43:29.318552 | orchestrator | 2025-05-14 14:43:29.318557 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 14:43:29.318561 | orchestrator | Wednesday 14 May 2025 14:37:30 +0000 (0:00:05.506) 0:06:56.729 ********* 2025-05-14 14:43:29.318565 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318570 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318574 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318579 | orchestrator | 2025-05-14 14:43:29.318583 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 14:43:29.318588 | orchestrator | Wednesday 14 
May 2025 14:37:30 +0000 (0:00:00.689) 0:06:57.418 ********* 2025-05-14 14:43:29.318592 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:29.318597 | orchestrator | 2025-05-14 14:43:29.318601 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-14 14:43:29.318635 | orchestrator | Wednesday 14 May 2025 14:37:31 +0000 (0:00:00.769) 0:06:58.188 ********* 2025-05-14 14:43:29.318641 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.318646 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.318651 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.318655 | orchestrator | 2025-05-14 14:43:29.318660 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 14:43:29.318664 | orchestrator | Wednesday 14 May 2025 14:37:31 +0000 (0:00:00.331) 0:06:58.520 ********* 2025-05-14 14:43:29.318668 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318673 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318677 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318682 | orchestrator | 2025-05-14 14:43:29.318686 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-14 14:43:29.318690 | orchestrator | Wednesday 14 May 2025 14:37:33 +0000 (0:00:01.175) 0:06:59.695 ********* 2025-05-14 14:43:29.318695 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:43:29.318703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:43:29.318708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:43:29.318712 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.318716 | orchestrator | 2025-05-14 14:43:29.318721 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-14 14:43:29.318725 | orchestrator | Wednesday 14 May 2025 14:37:34 +0000 (0:00:00.928) 0:07:00.624 ********* 2025-05-14 14:43:29.318730 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.318734 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.318739 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.318743 | orchestrator | 2025-05-14 14:43:29.318748 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.318752 | orchestrator | Wednesday 14 May 2025 14:37:34 +0000 (0:00:00.361) 0:07:00.986 ********* 2025-05-14 14:43:29.318756 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.318780 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.318785 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.318789 | orchestrator | 2025-05-14 14:43:29.318797 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-14 14:43:29.318801 | orchestrator | 2025-05-14 14:43:29.318805 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.318809 | orchestrator | Wednesday 14 May 2025 14:37:36 +0000 (0:00:02.021) 0:07:03.007 ********* 2025-05-14 14:43:29.318813 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.318817 | orchestrator | 2025-05-14 14:43:29.318821 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-05-14 14:43:29.318825 | orchestrator | Wednesday 14 May 2025 14:37:37 +0000 (0:00:00.763) 0:07:03.771 ********* 2025-05-14 14:43:29.318829 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.318833 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.318837 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.318841 | orchestrator | 2025-05-14 14:43:29.318845 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.318849 | orchestrator | Wednesday 14 May 2025 14:37:37 +0000 (0:00:00.342) 0:07:04.113 ********* 2025-05-14 14:43:29.318853 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.318857 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.318861 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.318865 | orchestrator | 2025-05-14 14:43:29.318869 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.318873 | orchestrator | Wednesday 14 May 2025 14:37:38 +0000 (0:00:00.711) 0:07:04.824 ********* 2025-05-14 14:43:29.318878 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.318882 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.318886 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.318890 | orchestrator | 2025-05-14 14:43:29.318894 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 14:43:29.318898 | orchestrator | Wednesday 14 May 2025 14:37:39 +0000 (0:00:01.065) 0:07:05.890 ********* 2025-05-14 14:43:29.318902 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.318906 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.318910 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.318914 | orchestrator | 2025-05-14 14:43:29.318918 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.318922 | orchestrator | Wednesday 14 May 2025 14:37:40 +0000 (0:00:00.791) 0:07:06.681 ********* 2025-05-14 14:43:29.318926 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.318930 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.318934 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.318938 | orchestrator | 2025-05-14 14:43:29.318942 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 14:43:29.318949 | orchestrator | Wednesday 14 May 2025 14:37:40 +0000 (0:00:00.352) 0:07:07.034 ********* 2025-05-14 14:43:29.318953 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.318957 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.318962 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.318966 | orchestrator | 2025-05-14 14:43:29.318970 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.318974 | orchestrator | Wednesday 14 May 2025 14:37:40 +0000 (0:00:00.352) 0:07:07.387 ********* 2025-05-14 14:43:29.318978 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.318982 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.318986 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.318990 | orchestrator | 2025-05-14 14:43:29.318994 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.318998 | orchestrator | Wednesday 
14 May 2025 14:37:41 +0000 (0:00:00.579) 0:07:07.967 ********* 2025-05-14 14:43:29.319002 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319006 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319010 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319014 | orchestrator | 2025-05-14 14:43:29.319018 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.319022 | orchestrator | Wednesday 14 May 2025 14:37:41 +0000 (0:00:00.317) 0:07:08.284 ********* 2025-05-14 14:43:29.319026 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319030 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319034 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319038 | orchestrator | 2025-05-14 14:43:29.319042 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.319046 | orchestrator | Wednesday 14 May 2025 14:37:42 +0000 (0:00:00.340) 0:07:08.624 ********* 2025-05-14 14:43:29.319050 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319054 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319058 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319062 | orchestrator | 2025-05-14 14:43:29.319066 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 14:43:29.319070 | orchestrator | Wednesday 14 May 2025 14:37:42 +0000 (0:00:00.342) 0:07:08.966 ********* 2025-05-14 14:43:29.319074 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.319078 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.319082 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.319086 | orchestrator | 2025-05-14 14:43:29.319090 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.319094 | orchestrator | Wednesday 14 May 2025 14:37:43 +0000 (0:00:01.083) 0:07:10.050 ********* 2025-05-14 14:43:29.319098 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319102 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319106 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319110 | orchestrator | 2025-05-14 14:43:29.319114 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.319118 | orchestrator | Wednesday 14 May 2025 14:37:43 +0000 (0:00:00.354) 0:07:10.404 ********* 2025-05-14 14:43:29.319122 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319126 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319130 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319134 | orchestrator | 2025-05-14 14:43:29.319138 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.319156 | orchestrator | Wednesday 14 May 2025 14:37:44 +0000 (0:00:00.309) 0:07:10.713 ********* 2025-05-14 14:43:29.319161 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.319168 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.319172 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.319176 | orchestrator | 2025-05-14 14:43:29.319180 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.319185 | orchestrator | Wednesday 14 May 2025 14:37:44 +0000 (0:00:00.333) 0:07:11.047 ********* 2025-05-14 14:43:29.319192 | 
orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.319196 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.319200 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.319204 | orchestrator | 2025-05-14 14:43:29.319208 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.319212 | orchestrator | Wednesday 14 May 2025 14:37:45 +0000 (0:00:00.680) 0:07:11.728 ********* 2025-05-14 14:43:29.319216 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.319220 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.319224 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.319228 | orchestrator | 2025-05-14 14:43:29.319232 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.319236 | orchestrator | Wednesday 14 May 2025 14:37:45 +0000 (0:00:00.376) 0:07:12.104 ********* 2025-05-14 14:43:29.319240 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319244 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319248 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319252 | orchestrator | 2025-05-14 14:43:29.319256 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.319260 | orchestrator | Wednesday 14 May 2025 14:37:45 +0000 (0:00:00.315) 0:07:12.419 ********* 2025-05-14 14:43:29.319264 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319268 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319272 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319276 | orchestrator | 2025-05-14 14:43:29.319280 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.319284 | orchestrator | Wednesday 14 May 2025 14:37:46 +0000 (0:00:00.312) 0:07:12.732 ********* 2025-05-14 14:43:29.319288 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319292 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319296 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319300 | orchestrator | 2025-05-14 14:43:29.319304 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.319308 | orchestrator | Wednesday 14 May 2025 14:37:46 +0000 (0:00:00.600) 0:07:13.332 ********* 2025-05-14 14:43:29.319312 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.319316 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.319320 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.319324 | orchestrator | 2025-05-14 14:43:29.319328 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.319332 | orchestrator | Wednesday 14 May 2025 14:37:47 +0000 (0:00:00.386) 0:07:13.719 ********* 2025-05-14 14:43:29.319336 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319340 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319344 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319348 | orchestrator | 2025-05-14 14:43:29.319352 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.319356 | orchestrator | Wednesday 14 May 2025 14:37:47 +0000 (0:00:00.359) 0:07:14.078 ********* 2025-05-14 14:43:29.319360 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319364 | orchestrator | skipping: [testbed-node-4] 
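The ceph-config OSD-counting tasks in this play (reset num_osds, "count number of osds for lvm scenario", and the 'ceph-volume lvm batch --report' / 'ceph-volume lvm list' calls) are skipped here on every node. For reference, the report they would otherwise rely on can be produced by hand roughly as follows; the device paths are purely illustrative assumptions:

    # Dry-run report of the OSDs ceph-volume would create on the given devices.
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    # List OSDs that ceph-volume has already created on this host.
    ceph-volume lvm list --format json
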
2025-05-14 14:43:29.319368 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319372 | orchestrator | 2025-05-14 14:43:29.319376 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.319380 | orchestrator | Wednesday 14 May 2025 14:37:47 +0000 (0:00:00.303) 0:07:14.382 ********* 2025-05-14 14:43:29.319384 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319388 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319392 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319396 | orchestrator | 2025-05-14 14:43:29.319400 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.319405 | orchestrator | Wednesday 14 May 2025 14:37:48 +0000 (0:00:00.534) 0:07:14.916 ********* 2025-05-14 14:43:29.319409 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319416 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319420 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319424 | orchestrator | 2025-05-14 14:43:29.319428 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.319432 | orchestrator | Wednesday 14 May 2025 14:37:48 +0000 (0:00:00.308) 0:07:15.225 ********* 2025-05-14 14:43:29.319436 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319440 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319444 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319448 | orchestrator | 2025-05-14 14:43:29.319452 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.319456 | orchestrator | Wednesday 14 May 2025 14:37:48 +0000 (0:00:00.285) 0:07:15.511 ********* 2025-05-14 14:43:29.319460 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319464 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319468 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319472 | orchestrator | 2025-05-14 14:43:29.319476 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.319480 | orchestrator | Wednesday 14 May 2025 14:37:49 +0000 (0:00:00.277) 0:07:15.789 ********* 2025-05-14 14:43:29.319484 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319488 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319493 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319499 | orchestrator | 2025-05-14 14:43:29.319507 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.319513 | orchestrator | Wednesday 14 May 2025 14:37:49 +0000 (0:00:00.456) 0:07:16.245 ********* 2025-05-14 14:43:29.319521 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319528 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319535 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319541 | orchestrator | 2025-05-14 14:43:29.319564 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.319569 | orchestrator | Wednesday 14 May 2025 14:37:50 +0000 (0:00:00.290) 0:07:16.535 ********* 2025-05-14 14:43:29.319576 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319580 | orchestrator | skipping: [testbed-node-4] 2025-05-14 
14:43:29.319584 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319588 | orchestrator | 2025-05-14 14:43:29.319592 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.319596 | orchestrator | Wednesday 14 May 2025 14:37:50 +0000 (0:00:00.335) 0:07:16.871 ********* 2025-05-14 14:43:29.319601 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319604 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319622 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319626 | orchestrator | 2025-05-14 14:43:29.319630 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.319634 | orchestrator | Wednesday 14 May 2025 14:37:50 +0000 (0:00:00.311) 0:07:17.182 ********* 2025-05-14 14:43:29.319638 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319642 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319646 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319650 | orchestrator | 2025-05-14 14:43:29.319654 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.319658 | orchestrator | Wednesday 14 May 2025 14:37:51 +0000 (0:00:00.454) 0:07:17.636 ********* 2025-05-14 14:43:29.319662 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319666 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319670 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319674 | orchestrator | 2025-05-14 14:43:29.319678 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.319683 | orchestrator | Wednesday 14 May 2025 14:37:51 +0000 (0:00:00.299) 0:07:17.936 ********* 2025-05-14 14:43:29.319692 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.319697 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.319701 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319705 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.319709 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.319713 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319717 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.319721 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.319725 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319729 | orchestrator | 2025-05-14 14:43:29.319733 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.319737 | orchestrator | Wednesday 14 May 2025 14:37:51 +0000 (0:00:00.324) 0:07:18.260 ********* 2025-05-14 14:43:29.319741 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 14:43:29.319745 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 14:43:29.319749 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319753 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 14:43:29.319757 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 14:43:29.319761 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319765 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 
14:43:29.319769 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 14:43:29.319773 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319777 | orchestrator | 2025-05-14 14:43:29.319781 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.319785 | orchestrator | Wednesday 14 May 2025 14:37:52 +0000 (0:00:00.341) 0:07:18.601 ********* 2025-05-14 14:43:29.319789 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319794 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319798 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319802 | orchestrator | 2025-05-14 14:43:29.319806 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.319810 | orchestrator | Wednesday 14 May 2025 14:37:52 +0000 (0:00:00.470) 0:07:19.072 ********* 2025-05-14 14:43:29.319814 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319818 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319822 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319826 | orchestrator | 2025-05-14 14:43:29.319830 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.319834 | orchestrator | Wednesday 14 May 2025 14:37:52 +0000 (0:00:00.273) 0:07:19.345 ********* 2025-05-14 14:43:29.319838 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319842 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319846 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319850 | orchestrator | 2025-05-14 14:43:29.319854 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.319858 | orchestrator | Wednesday 14 May 2025 14:37:53 +0000 (0:00:00.287) 0:07:19.632 ********* 2025-05-14 14:43:29.319862 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319866 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319870 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319875 | orchestrator | 2025-05-14 14:43:29.319879 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.319883 | orchestrator | Wednesday 14 May 2025 14:37:53 +0000 (0:00:00.282) 0:07:19.914 ********* 2025-05-14 14:43:29.319887 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319891 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319895 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319899 | orchestrator | 2025-05-14 14:43:29.319903 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.319911 | orchestrator | Wednesday 14 May 2025 14:37:53 +0000 (0:00:00.571) 0:07:20.486 ********* 2025-05-14 14:43:29.319915 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319919 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.319923 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.319927 | orchestrator | 2025-05-14 14:43:29.319946 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.319954 | orchestrator | Wednesday 14 May 2025 14:37:54 +0000 (0:00:00.408) 0:07:20.895 ********* 2025-05-14 14:43:29.319958 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.319962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.319966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.319970 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.319974 | orchestrator | 2025-05-14 14:43:29.319978 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.319982 | orchestrator | Wednesday 14 May 2025 14:37:54 +0000 (0:00:00.486) 0:07:21.382 ********* 2025-05-14 14:43:29.319986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.319990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.319995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.320002 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320009 | orchestrator | 2025-05-14 14:43:29.320015 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.320020 | orchestrator | Wednesday 14 May 2025 14:37:55 +0000 (0:00:00.432) 0:07:21.815 ********* 2025-05-14 14:43:29.320026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.320032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.320038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.320045 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320051 | orchestrator | 2025-05-14 14:43:29.320057 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.320064 | orchestrator | Wednesday 14 May 2025 14:37:55 +0000 (0:00:00.438) 0:07:22.254 ********* 2025-05-14 14:43:29.320071 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320075 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320079 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320083 | orchestrator | 2025-05-14 14:43:29.320087 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.320091 | orchestrator | Wednesday 14 May 2025 14:37:56 +0000 (0:00:00.319) 0:07:22.573 ********* 2025-05-14 14:43:29.320095 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.320099 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320103 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.320107 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320111 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.320115 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320119 | orchestrator | 2025-05-14 14:43:29.320123 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.320127 | orchestrator | Wednesday 14 May 2025 14:37:56 +0000 (0:00:00.623) 0:07:23.196 ********* 2025-05-14 14:43:29.320131 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320135 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320139 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320143 | orchestrator | 2025-05-14 14:43:29.320147 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.320151 | 
orchestrator | Wednesday 14 May 2025 14:37:56 +0000 (0:00:00.284) 0:07:23.481 ********* 2025-05-14 14:43:29.320155 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320165 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320169 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320173 | orchestrator | 2025-05-14 14:43:29.320177 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.320181 | orchestrator | Wednesday 14 May 2025 14:37:57 +0000 (0:00:00.299) 0:07:23.780 ********* 2025-05-14 14:43:29.320185 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.320189 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320193 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.320197 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320201 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.320205 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320209 | orchestrator | 2025-05-14 14:43:29.320213 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.320217 | orchestrator | Wednesday 14 May 2025 14:37:57 +0000 (0:00:00.385) 0:07:24.166 ********* 2025-05-14 14:43:29.320221 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.320225 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320230 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.320234 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320238 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.320242 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320246 | orchestrator | 2025-05-14 14:43:29.320250 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.320254 | orchestrator | Wednesday 14 May 2025 14:37:58 +0000 (0:00:00.474) 0:07:24.641 ********* 2025-05-14 14:43:29.320258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.320262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.320266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.320270 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.320274 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.320278 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.320298 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320303 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320311 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.320315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.320319 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.320323 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320327 | orchestrator | 2025-05-14 14:43:29.320331 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.320335 | orchestrator | Wednesday 14 May 2025 14:37:58 +0000 (0:00:00.612) 0:07:25.253 ********* 2025-05-14 14:43:29.320339 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320343 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320347 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320352 | orchestrator | 2025-05-14 14:43:29.320356 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.320360 | orchestrator | Wednesday 14 May 2025 14:37:59 +0000 (0:00:00.648) 0:07:25.902 ********* 2025-05-14 14:43:29.320364 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.320368 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320372 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.320376 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320380 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.320388 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320392 | orchestrator | 2025-05-14 14:43:29.320396 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.320400 | orchestrator | Wednesday 14 May 2025 14:37:59 +0000 (0:00:00.498) 0:07:26.400 ********* 2025-05-14 14:43:29.320404 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320408 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320412 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320416 | orchestrator | 2025-05-14 14:43:29.320420 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.320425 | orchestrator | Wednesday 14 May 2025 14:38:00 +0000 (0:00:00.638) 0:07:27.038 ********* 2025-05-14 14:43:29.320429 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320433 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320437 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320441 | orchestrator | 2025-05-14 14:43:29.320445 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-14 14:43:29.320449 | orchestrator | Wednesday 14 May 2025 14:38:00 +0000 (0:00:00.483) 0:07:27.522 ********* 2025-05-14 14:43:29.320453 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.320457 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.320461 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.320466 | orchestrator | 2025-05-14 14:43:29.320470 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-14 14:43:29.320474 | orchestrator | Wednesday 14 May 2025 14:38:01 +0000 (0:00:00.311) 0:07:27.834 ********* 2025-05-14 14:43:29.320478 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 14:43:29.320482 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:43:29.320486 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:43:29.320490 | orchestrator | 2025-05-14 14:43:29.320494 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-14 14:43:29.320498 | orchestrator | Wednesday 14 May 2025 14:38:02 +0000 (0:00:01.077) 
0:07:28.911 ********* 2025-05-14 14:43:29.320502 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.320506 | orchestrator | 2025-05-14 14:43:29.320510 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-14 14:43:29.320515 | orchestrator | Wednesday 14 May 2025 14:38:02 +0000 (0:00:00.579) 0:07:29.491 ********* 2025-05-14 14:43:29.320522 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320528 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320535 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320543 | orchestrator | 2025-05-14 14:43:29.320550 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-14 14:43:29.320556 | orchestrator | Wednesday 14 May 2025 14:38:03 +0000 (0:00:00.313) 0:07:29.804 ********* 2025-05-14 14:43:29.320562 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320570 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320574 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320578 | orchestrator | 2025-05-14 14:43:29.320582 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-14 14:43:29.320586 | orchestrator | Wednesday 14 May 2025 14:38:03 +0000 (0:00:00.613) 0:07:30.418 ********* 2025-05-14 14:43:29.320591 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320595 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320599 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320603 | orchestrator | 2025-05-14 14:43:29.320620 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-14 14:43:29.320625 | orchestrator | Wednesday 14 May 2025 14:38:04 +0000 (0:00:00.357) 0:07:30.775 ********* 2025-05-14 14:43:29.320629 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320639 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320643 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320647 | orchestrator | 2025-05-14 14:43:29.320651 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-14 14:43:29.320655 | orchestrator | Wednesday 14 May 2025 14:38:04 +0000 (0:00:00.345) 0:07:31.120 ********* 2025-05-14 14:43:29.320659 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.320663 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.320667 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.320671 | orchestrator | 2025-05-14 14:43:29.320675 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-14 14:43:29.320680 | orchestrator | Wednesday 14 May 2025 14:38:05 +0000 (0:00:00.655) 0:07:31.776 ********* 2025-05-14 14:43:29.320700 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.320705 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.320709 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.320713 | orchestrator | 2025-05-14 14:43:29.320721 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-14 14:43:29.320725 | orchestrator | Wednesday 14 May 2025 14:38:06 +0000 (0:00:00.758) 0:07:32.535 ********* 2025-05-14 14:43:29.320729 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-05-14 14:43:29.320733 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 14:43:29.320737 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 14:43:29.320741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 14:43:29.320746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 14:43:29.320750 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 14:43:29.320754 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 14:43:29.320758 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 14:43:29.320762 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 14:43:29.320766 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 14:43:29.320771 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 14:43:29.320774 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 14:43:29.320778 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 14:43:29.320783 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 14:43:29.320787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 14:43:29.320791 | orchestrator | 2025-05-14 14:43:29.320795 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-14 14:43:29.320799 | orchestrator | Wednesday 14 May 2025 14:38:10 +0000 (0:00:04.502) 0:07:37.038 ********* 2025-05-14 14:43:29.320803 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.320807 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.320811 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.320815 | orchestrator | 2025-05-14 14:43:29.320819 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-14 14:43:29.320823 | orchestrator | Wednesday 14 May 2025 14:38:10 +0000 (0:00:00.325) 0:07:37.364 ********* 2025-05-14 14:43:29.320827 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.320831 | orchestrator | 2025-05-14 14:43:29.320835 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-14 14:43:29.320843 | orchestrator | Wednesday 14 May 2025 14:38:11 +0000 (0:00:00.838) 0:07:38.202 ********* 2025-05-14 14:43:29.320847 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 14:43:29.320851 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 14:43:29.320855 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 14:43:29.320859 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-14 14:43:29.320864 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-14 14:43:29.320868 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-14 14:43:29.320872 | orchestrator | 2025-05-14 14:43:29.320876 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-14 14:43:29.320880 | orchestrator | Wednesday 14 May 2025 14:38:12 +0000 (0:00:01.094) 0:07:39.297 ********* 2025-05-14 14:43:29.320884 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:43:29.320888 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.320892 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 14:43:29.320896 | orchestrator | 2025-05-14 14:43:29.320900 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-14 14:43:29.320904 | orchestrator | Wednesday 14 May 2025 14:38:14 +0000 (0:00:01.745) 0:07:41.043 ********* 2025-05-14 14:43:29.320908 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 14:43:29.320912 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.320916 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.320921 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 14:43:29.320925 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.320929 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.320933 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 14:43:29.320937 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.320941 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.320945 | orchestrator | 2025-05-14 14:43:29.320949 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-14 14:43:29.320953 | orchestrator | Wednesday 14 May 2025 14:38:16 +0000 (0:00:01.495) 0:07:42.539 ********* 2025-05-14 14:43:29.320957 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.320961 | orchestrator | 2025-05-14 14:43:29.320965 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-14 14:43:29.320983 | orchestrator | Wednesday 14 May 2025 14:38:18 +0000 (0:00:02.215) 0:07:44.754 ********* 2025-05-14 14:43:29.320992 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.320997 | orchestrator | 2025-05-14 14:43:29.321001 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-14 14:43:29.321005 | orchestrator | Wednesday 14 May 2025 14:38:18 +0000 (0:00:00.587) 0:07:45.342 ********* 2025-05-14 14:43:29.321009 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321013 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321017 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321021 | orchestrator | 2025-05-14 14:43:29.321025 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-14 14:43:29.321029 | orchestrator | Wednesday 14 May 2025 14:38:19 +0000 (0:00:00.542) 0:07:45.884 ********* 2025-05-14 14:43:29.321033 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321037 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321041 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
14:43:29.321045 | orchestrator | 2025-05-14 14:43:29.321049 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-14 14:43:29.321053 | orchestrator | Wednesday 14 May 2025 14:38:19 +0000 (0:00:00.325) 0:07:46.209 ********* 2025-05-14 14:43:29.321061 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321065 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321069 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321073 | orchestrator | 2025-05-14 14:43:29.321077 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-14 14:43:29.321082 | orchestrator | Wednesday 14 May 2025 14:38:19 +0000 (0:00:00.304) 0:07:46.514 ********* 2025-05-14 14:43:29.321086 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.321090 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.321094 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.321098 | orchestrator | 2025-05-14 14:43:29.321102 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-14 14:43:29.321106 | orchestrator | Wednesday 14 May 2025 14:38:20 +0000 (0:00:00.272) 0:07:46.786 ********* 2025-05-14 14:43:29.321110 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.321114 | orchestrator | 2025-05-14 14:43:29.321118 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-14 14:43:29.321122 | orchestrator | Wednesday 14 May 2025 14:38:20 +0000 (0:00:00.661) 0:07:47.447 ********* 2025-05-14 14:43:29.321126 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d', 'data_vg': 'ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d'}) 2025-05-14 14:43:29.321132 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6', 'data_vg': 'ceph-904dffa8-69ed-5eff-9e62-bfdd56e5c3c6'}) 2025-05-14 14:43:29.321136 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd', 'data_vg': 'ceph-5e8c3a6b-4eea-5bb3-8225-c520f5fcabbd'}) 2025-05-14 14:43:29.321140 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5402478b-0937-58a5-a80f-00ed6e381d0d', 'data_vg': 'ceph-5402478b-0937-58a5-a80f-00ed6e381d0d'}) 2025-05-14 14:43:29.321144 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6248da54-4321-5f95-9f37-ef0f81563cc8', 'data_vg': 'ceph-6248da54-4321-5f95-9f37-ef0f81563cc8'}) 2025-05-14 14:43:29.321148 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-46afb65a-1642-5955-80d8-115babed40cc', 'data_vg': 'ceph-46afb65a-1642-5955-80d8-115babed40cc'}) 2025-05-14 14:43:29.321152 | orchestrator | 2025-05-14 14:43:29.321156 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-14 14:43:29.321160 | orchestrator | Wednesday 14 May 2025 14:39:01 +0000 (0:00:40.618) 0:08:28.066 ********* 2025-05-14 14:43:29.321164 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321169 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321173 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321177 | orchestrator | 2025-05-14 14:43:29.321181 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-05-14 14:43:29.321185 | orchestrator | Wednesday 14 May 2025 14:39:01 +0000 (0:00:00.445) 0:08:28.512 ********* 2025-05-14 14:43:29.321189 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.321193 | orchestrator | 2025-05-14 14:43:29.321197 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-14 14:43:29.321201 | orchestrator | Wednesday 14 May 2025 14:39:02 +0000 (0:00:00.566) 0:08:29.078 ********* 2025-05-14 14:43:29.321205 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.321209 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.321213 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.321217 | orchestrator | 2025-05-14 14:43:29.321221 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-14 14:43:29.321226 | orchestrator | Wednesday 14 May 2025 14:39:03 +0000 (0:00:00.653) 0:08:29.732 ********* 2025-05-14 14:43:29.321230 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.321237 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.321241 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.321245 | orchestrator | 2025-05-14 14:43:29.321249 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-14 14:43:29.321253 | orchestrator | Wednesday 14 May 2025 14:39:05 +0000 (0:00:01.956) 0:08:31.689 ********* 2025-05-14 14:43:29.321271 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.321276 | orchestrator | 2025-05-14 14:43:29.321283 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-14 14:43:29.321287 | orchestrator | Wednesday 14 May 2025 14:39:05 +0000 (0:00:00.613) 0:08:32.302 ********* 2025-05-14 14:43:29.321291 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.321295 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.321299 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.321303 | orchestrator | 2025-05-14 14:43:29.321307 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-14 14:43:29.321311 | orchestrator | Wednesday 14 May 2025 14:39:07 +0000 (0:00:01.477) 0:08:33.780 ********* 2025-05-14 14:43:29.321315 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.321319 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.321323 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.321327 | orchestrator | 2025-05-14 14:43:29.321331 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-14 14:43:29.321335 | orchestrator | Wednesday 14 May 2025 14:39:08 +0000 (0:00:01.188) 0:08:34.968 ********* 2025-05-14 14:43:29.321339 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.321343 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.321347 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.321351 | orchestrator | 2025-05-14 14:43:29.321355 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-14 14:43:29.321359 | orchestrator | Wednesday 14 May 2025 14:39:10 +0000 (0:00:01.686) 0:08:36.655 ********* 2025-05-14 14:43:29.321363 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:43:29.321367 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321371 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321375 | orchestrator | 2025-05-14 14:43:29.321379 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-14 14:43:29.321383 | orchestrator | Wednesday 14 May 2025 14:39:10 +0000 (0:00:00.401) 0:08:37.056 ********* 2025-05-14 14:43:29.321387 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321391 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321395 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321399 | orchestrator | 2025-05-14 14:43:29.321403 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-14 14:43:29.321408 | orchestrator | Wednesday 14 May 2025 14:39:11 +0000 (0:00:00.700) 0:08:37.757 ********* 2025-05-14 14:43:29.321412 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-05-14 14:43:29.321416 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 14:43:29.321420 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-14 14:43:29.321424 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-05-14 14:43:29.321428 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-05-14 14:43:29.321432 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-05-14 14:43:29.321436 | orchestrator | 2025-05-14 14:43:29.321440 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-14 14:43:29.321444 | orchestrator | Wednesday 14 May 2025 14:39:12 +0000 (0:00:01.051) 0:08:38.808 ********* 2025-05-14 14:43:29.321448 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-05-14 14:43:29.321452 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-05-14 14:43:29.321456 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-14 14:43:29.321460 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-05-14 14:43:29.321464 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-05-14 14:43:29.321472 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-05-14 14:43:29.321476 | orchestrator | 2025-05-14 14:43:29.321480 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-14 14:43:29.321484 | orchestrator | Wednesday 14 May 2025 14:39:15 +0000 (0:00:03.475) 0:08:42.284 ********* 2025-05-14 14:43:29.321488 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321492 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321496 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.321500 | orchestrator | 2025-05-14 14:43:29.321504 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-14 14:43:29.321508 | orchestrator | Wednesday 14 May 2025 14:39:18 +0000 (0:00:02.709) 0:08:44.993 ********* 2025-05-14 14:43:29.321512 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321516 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321520 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
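The tasks above cover the whole OSD bring-up on testbed-node-3/4/5: kernel tuning, "use ceph-volume to create bluestore osds" on the pre-created volume groups, generation and start of the per-OSD systemd units, and the noup flag that is set before and cleared after the daemons start. A hedged sketch of the equivalent manual steps on a single node — not the exact containerized commands the role runs; the volume group name and OSD count below are taken from the log items purely as examples:

    # kernel tuning applied by "apply operating system tuning"
    sysctl -w fs.aio-max-nr=1048576
    sysctl -w vm.swappiness=10
    sysctl -w vm.min_free_kbytes=67584

    # prepare and activate one bluestore OSD on an existing LV
    # (--dmcrypt matches the osd_dmcrypt=1 container_env_args fact above)
    ceph-volume lvm create --bluestore --dmcrypt \
        --data ceph-dde3cc5c-c032-592e-96b0-b740b8614a8d/osd-block-dde3cc5c-c032-592e-96b0-b740b8614a8d

    # keep new OSDs from being marked up until everything is started
    ceph osd set noup
    systemctl enable --now ceph-osd.target ceph-osd@2.service
    ceph osd unset noup

    # "wait for all osd to be up": poll until all 6 OSDs report up
    until ceph osd stat | grep -q '6 up'; do sleep 10; done
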
2025-05-14 14:43:29.321524 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.321528 | orchestrator | 2025-05-14 14:43:29.321533 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-14 14:43:29.321539 | orchestrator | Wednesday 14 May 2025 14:39:31 +0000 (0:00:12.635) 0:08:57.628 ********* 2025-05-14 14:43:29.321546 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321552 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321559 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321567 | orchestrator | 2025-05-14 14:43:29.321574 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-14 14:43:29.321581 | orchestrator | Wednesday 14 May 2025 14:39:31 +0000 (0:00:00.477) 0:08:58.105 ********* 2025-05-14 14:43:29.321587 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321594 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321599 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321602 | orchestrator | 2025-05-14 14:43:29.321636 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 14:43:29.321641 | orchestrator | Wednesday 14 May 2025 14:39:32 +0000 (0:00:01.117) 0:08:59.223 ********* 2025-05-14 14:43:29.321645 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.321649 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.321653 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.321657 | orchestrator | 2025-05-14 14:43:29.321661 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-14 14:43:29.321665 | orchestrator | Wednesday 14 May 2025 14:39:33 +0000 (0:00:00.901) 0:09:00.125 ********* 2025-05-14 14:43:29.321685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.321690 | orchestrator | 2025-05-14 14:43:29.321695 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-14 14:43:29.321699 | orchestrator | Wednesday 14 May 2025 14:39:34 +0000 (0:00:00.562) 0:09:00.687 ********* 2025-05-14 14:43:29.321703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.321707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.321711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.321715 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321719 | orchestrator | 2025-05-14 14:43:29.321722 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-14 14:43:29.321726 | orchestrator | Wednesday 14 May 2025 14:39:34 +0000 (0:00:00.418) 0:09:01.106 ********* 2025-05-14 14:43:29.321730 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321734 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321738 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321742 | orchestrator | 2025-05-14 14:43:29.321747 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-14 14:43:29.321755 | orchestrator | Wednesday 14 May 2025 14:39:34 +0000 (0:00:00.322) 0:09:01.428 ********* 2025-05-14 14:43:29.321759 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
14:43:29.321763 | orchestrator | 2025-05-14 14:43:29.321767 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-14 14:43:29.321771 | orchestrator | Wednesday 14 May 2025 14:39:35 +0000 (0:00:00.247) 0:09:01.676 ********* 2025-05-14 14:43:29.321775 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321779 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321783 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321786 | orchestrator | 2025-05-14 14:43:29.321790 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-14 14:43:29.321794 | orchestrator | Wednesday 14 May 2025 14:39:35 +0000 (0:00:00.588) 0:09:02.264 ********* 2025-05-14 14:43:29.321798 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321802 | orchestrator | 2025-05-14 14:43:29.321807 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-14 14:43:29.321811 | orchestrator | Wednesday 14 May 2025 14:39:35 +0000 (0:00:00.254) 0:09:02.519 ********* 2025-05-14 14:43:29.321815 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321819 | orchestrator | 2025-05-14 14:43:29.321823 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-14 14:43:29.321827 | orchestrator | Wednesday 14 May 2025 14:39:36 +0000 (0:00:00.240) 0:09:02.759 ********* 2025-05-14 14:43:29.321831 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321834 | orchestrator | 2025-05-14 14:43:29.321838 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-14 14:43:29.321842 | orchestrator | Wednesday 14 May 2025 14:39:36 +0000 (0:00:00.133) 0:09:02.893 ********* 2025-05-14 14:43:29.321846 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321850 | orchestrator | 2025-05-14 14:43:29.321854 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-14 14:43:29.321858 | orchestrator | Wednesday 14 May 2025 14:39:36 +0000 (0:00:00.226) 0:09:03.119 ********* 2025-05-14 14:43:29.321862 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321866 | orchestrator | 2025-05-14 14:43:29.321870 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-14 14:43:29.321874 | orchestrator | Wednesday 14 May 2025 14:39:36 +0000 (0:00:00.234) 0:09:03.353 ********* 2025-05-14 14:43:29.321878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.321882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.321886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.321890 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321894 | orchestrator | 2025-05-14 14:43:29.321898 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-14 14:43:29.321902 | orchestrator | Wednesday 14 May 2025 14:39:37 +0000 (0:00:00.392) 0:09:03.745 ********* 2025-05-14 14:43:29.321906 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321910 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.321914 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.321918 | orchestrator | 2025-05-14 14:43:29.321922 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-05-14 14:43:29.321926 | orchestrator | Wednesday 14 May 2025 14:39:37 +0000 (0:00:00.581) 0:09:04.327 ********* 2025-05-14 14:43:29.321930 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.321934 | orchestrator | 2025-05-14 14:43:29.321938 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-14 14:43:29.322002 | orchestrator | Wednesday 14 May 2025 14:39:38 +0000 (0:00:00.250) 0:09:04.577 ********* 2025-05-14 14:43:29.322056 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322060 | orchestrator | 2025-05-14 14:43:29.322064 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.322073 | orchestrator | Wednesday 14 May 2025 14:39:38 +0000 (0:00:00.234) 0:09:04.812 ********* 2025-05-14 14:43:29.322077 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.322081 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.322085 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.322089 | orchestrator | 2025-05-14 14:43:29.322093 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-14 14:43:29.322097 | orchestrator | 2025-05-14 14:43:29.322101 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.322105 | orchestrator | Wednesday 14 May 2025 14:39:41 +0000 (0:00:03.118) 0:09:07.930 ********* 2025-05-14 14:43:29.322109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.322115 | orchestrator | 2025-05-14 14:43:29.322119 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 14:43:29.322141 | orchestrator | Wednesday 14 May 2025 14:39:42 +0000 (0:00:01.490) 0:09:09.421 ********* 2025-05-14 14:43:29.322146 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322153 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.322157 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322162 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.322165 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322169 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.322173 | orchestrator | 2025-05-14 14:43:29.322176 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.322180 | orchestrator | Wednesday 14 May 2025 14:39:43 +0000 (0:00:00.758) 0:09:10.179 ********* 2025-05-14 14:43:29.322184 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322187 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322191 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322195 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322198 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322202 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322205 | orchestrator | 2025-05-14 14:43:29.322209 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.322213 | orchestrator | Wednesday 14 May 2025 14:39:44 +0000 (0:00:01.091) 0:09:11.271 ********* 2025-05-14 14:43:29.322217 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322220 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 14:43:29.322224 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322227 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322231 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322235 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322238 | orchestrator | 2025-05-14 14:43:29.322242 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 14:43:29.322246 | orchestrator | Wednesday 14 May 2025 14:39:45 +0000 (0:00:00.951) 0:09:12.222 ********* 2025-05-14 14:43:29.322249 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322253 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322257 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322260 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322264 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322268 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322271 | orchestrator | 2025-05-14 14:43:29.322275 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.322279 | orchestrator | Wednesday 14 May 2025 14:39:46 +0000 (0:00:01.212) 0:09:13.435 ********* 2025-05-14 14:43:29.322282 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322286 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322290 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.322293 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.322297 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322301 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.322304 | orchestrator | 2025-05-14 14:43:29.322311 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 14:43:29.322315 | orchestrator | Wednesday 14 May 2025 14:39:47 +0000 (0:00:01.034) 0:09:14.470 ********* 2025-05-14 14:43:29.322318 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322322 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322326 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322329 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322333 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322337 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322340 | orchestrator | 2025-05-14 14:43:29.322344 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.322348 | orchestrator | Wednesday 14 May 2025 14:39:48 +0000 (0:00:00.716) 0:09:15.187 ********* 2025-05-14 14:43:29.322352 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322355 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322359 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322362 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322366 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322370 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322374 | orchestrator | 2025-05-14 14:43:29.322377 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.322381 | orchestrator | Wednesday 14 May 2025 14:39:49 +0000 (0:00:00.854) 0:09:16.042 ********* 2025-05-14 14:43:29.322385 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322388 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322392 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322396 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322399 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322403 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322406 | orchestrator | 2025-05-14 14:43:29.322410 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.322414 | orchestrator | Wednesday 14 May 2025 14:39:50 +0000 (0:00:00.638) 0:09:16.680 ********* 2025-05-14 14:43:29.322417 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322421 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322425 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322428 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322432 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322435 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322439 | orchestrator | 2025-05-14 14:43:29.322443 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.322447 | orchestrator | Wednesday 14 May 2025 14:39:51 +0000 (0:00:00.924) 0:09:17.604 ********* 2025-05-14 14:43:29.322450 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322454 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322457 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322465 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322468 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322472 | orchestrator | 2025-05-14 14:43:29.322476 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 14:43:29.322483 | orchestrator | Wednesday 14 May 2025 14:39:51 +0000 (0:00:00.689) 0:09:18.293 ********* 2025-05-14 14:43:29.322487 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.322491 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.322494 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.322498 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322502 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322505 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322509 | orchestrator | 2025-05-14 14:43:29.322526 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.322534 | orchestrator | Wednesday 14 May 2025 14:39:53 +0000 (0:00:01.345) 0:09:19.639 ********* 2025-05-14 14:43:29.322542 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322546 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322549 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322553 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322559 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322566 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322572 | orchestrator | 2025-05-14 14:43:29.322578 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.322585 | orchestrator | Wednesday 14 May 2025 14:39:53 +0000 (0:00:00.736) 0:09:20.375 ********* 2025-05-14 14:43:29.322592 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.322599 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.322605 | orchestrator | ok: [testbed-node-2] 
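The skipped "osds handler" steps earlier in this play show the restart safety pattern ceph-ansible follows when an OSD restart is actually required: pause data movement, restart the daemons, then restore the previous state. Nothing needed restarting here, so every step was skipped. A hedged sketch of the same pattern done by hand — note the role itself records which pools had pg autoscaling enabled and only toggles those, whereas this sketch simply loops over all pools:

    # quiesce data movement before restarting OSDs
    ceph balancer off
    for pool in $(ceph osd pool ls); do
        ceph osd pool set "$pool" pg_autoscale_mode off
    done

    # restart all OSD daemons on this host
    systemctl restart ceph-osd.target

    # restore autoscaling and the balancer once the OSDs are back up
    for pool in $(ceph osd pool ls); do
        ceph osd pool set "$pool" pg_autoscale_mode on
    done
    ceph balancer on
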
2025-05-14 14:43:29.322626 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322630 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322634 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322637 | orchestrator | 2025-05-14 14:43:29.322641 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.322645 | orchestrator | Wednesday 14 May 2025 14:39:54 +0000 (0:00:00.982) 0:09:21.358 ********* 2025-05-14 14:43:29.322649 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322652 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322656 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322660 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322663 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322667 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322671 | orchestrator | 2025-05-14 14:43:29.322674 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.322678 | orchestrator | Wednesday 14 May 2025 14:39:55 +0000 (0:00:00.742) 0:09:22.100 ********* 2025-05-14 14:43:29.322682 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322685 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322689 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322693 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322696 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322700 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322704 | orchestrator | 2025-05-14 14:43:29.322707 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.322711 | orchestrator | Wednesday 14 May 2025 14:39:56 +0000 (0:00:00.947) 0:09:23.047 ********* 2025-05-14 14:43:29.322715 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322718 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322722 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322726 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322729 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322733 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322736 | orchestrator | 2025-05-14 14:43:29.322740 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.322744 | orchestrator | Wednesday 14 May 2025 14:39:57 +0000 (0:00:00.719) 0:09:23.767 ********* 2025-05-14 14:43:29.322747 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322751 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322755 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322758 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322762 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322765 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322769 | orchestrator | 2025-05-14 14:43:29.322773 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.322776 | orchestrator | Wednesday 14 May 2025 14:39:58 +0000 (0:00:00.825) 0:09:24.593 ********* 2025-05-14 14:43:29.322780 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322784 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322791 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 14:43:29.322798 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322802 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322805 | orchestrator | 2025-05-14 14:43:29.322809 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.322813 | orchestrator | Wednesday 14 May 2025 14:39:58 +0000 (0:00:00.646) 0:09:25.239 ********* 2025-05-14 14:43:29.322816 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.322820 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.322824 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.322827 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322831 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322835 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322838 | orchestrator | 2025-05-14 14:43:29.322842 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.322846 | orchestrator | Wednesday 14 May 2025 14:39:59 +0000 (0:00:00.831) 0:09:26.071 ********* 2025-05-14 14:43:29.322849 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.322853 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.322856 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.322860 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.322864 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.322867 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.322871 | orchestrator | 2025-05-14 14:43:29.322875 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.322879 | orchestrator | Wednesday 14 May 2025 14:40:00 +0000 (0:00:00.683) 0:09:26.754 ********* 2025-05-14 14:43:29.322882 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322886 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322889 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322893 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322896 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322900 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322904 | orchestrator | 2025-05-14 14:43:29.322907 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.322911 | orchestrator | Wednesday 14 May 2025 14:40:01 +0000 (0:00:00.853) 0:09:27.607 ********* 2025-05-14 14:43:29.322915 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322918 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322922 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322926 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.322929 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322947 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322952 | orchestrator | 2025-05-14 14:43:29.322958 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.322962 | orchestrator | Wednesday 14 May 2025 14:40:01 +0000 (0:00:00.651) 0:09:28.258 ********* 2025-05-14 14:43:29.322966 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.322969 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.322973 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.322976 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
14:43:29.322980 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.322984 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.322987 | orchestrator | 2025-05-14 14:43:29.322991 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.322995 | orchestrator | Wednesday 14 May 2025 14:40:02 +0000 (0:00:00.868) 0:09:29.126 ********* 2025-05-14 14:43:29.322999 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323002 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323006 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323009 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323013 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323017 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323020 | orchestrator | 2025-05-14 14:43:29.323024 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.323031 | orchestrator | Wednesday 14 May 2025 14:40:03 +0000 (0:00:00.648) 0:09:29.775 ********* 2025-05-14 14:43:29.323035 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323038 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323042 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323046 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323049 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323053 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323057 | orchestrator | 2025-05-14 14:43:29.323060 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.323064 | orchestrator | Wednesday 14 May 2025 14:40:04 +0000 (0:00:00.915) 0:09:30.690 ********* 2025-05-14 14:43:29.323068 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323071 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323075 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323079 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323082 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323086 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323089 | orchestrator | 2025-05-14 14:43:29.323093 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.323097 | orchestrator | Wednesday 14 May 2025 14:40:04 +0000 (0:00:00.647) 0:09:31.338 ********* 2025-05-14 14:43:29.323101 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323104 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323108 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323112 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323115 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323119 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323122 | orchestrator | 2025-05-14 14:43:29.323126 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.323130 | orchestrator | Wednesday 14 May 2025 14:40:05 +0000 (0:00:00.897) 0:09:32.236 ********* 2025-05-14 14:43:29.323134 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323137 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323141 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323145 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:43:29.323148 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323152 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323156 | orchestrator | 2025-05-14 14:43:29.323160 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.323163 | orchestrator | Wednesday 14 May 2025 14:40:06 +0000 (0:00:00.709) 0:09:32.945 ********* 2025-05-14 14:43:29.323167 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323171 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323174 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323178 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323182 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323185 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323189 | orchestrator | 2025-05-14 14:43:29.323193 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.323197 | orchestrator | Wednesday 14 May 2025 14:40:07 +0000 (0:00:00.913) 0:09:33.859 ********* 2025-05-14 14:43:29.323200 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323204 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323207 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323211 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323215 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323218 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323222 | orchestrator | 2025-05-14 14:43:29.323226 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.323229 | orchestrator | Wednesday 14 May 2025 14:40:08 +0000 (0:00:00.687) 0:09:34.546 ********* 2025-05-14 14:43:29.323236 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323240 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323243 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323247 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323251 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323255 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323258 | orchestrator | 2025-05-14 14:43:29.323262 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.323266 | orchestrator | Wednesday 14 May 2025 14:40:08 +0000 (0:00:00.909) 0:09:35.456 ********* 2025-05-14 14:43:29.323269 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323273 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323277 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323280 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323284 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323287 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323291 | orchestrator | 2025-05-14 14:43:29.323307 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.323314 | orchestrator | Wednesday 14 May 2025 14:40:09 +0000 (0:00:00.663) 0:09:36.120 ********* 2025-05-14 14:43:29.323318 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 14:43:29.323321 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 
14:43:29.323325 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323328 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.323332 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 14:43:29.323336 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323339 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.323343 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 14:43:29.323347 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323350 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.323354 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.323358 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.323361 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.323365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323369 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323372 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.323376 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.323380 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323383 | orchestrator | 2025-05-14 14:43:29.323387 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.323391 | orchestrator | Wednesday 14 May 2025 14:40:10 +0000 (0:00:00.957) 0:09:37.078 ********* 2025-05-14 14:43:29.323394 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 14:43:29.323398 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 14:43:29.323402 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323405 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 14:43:29.323409 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 14:43:29.323413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323416 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 14:43:29.323420 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 14:43:29.323424 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323427 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 14:43:29.323431 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 14:43:29.323435 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323438 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 14:43:29.323446 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 14:43:29.323450 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323453 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 14:43:29.323457 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 14:43:29.323461 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323464 | orchestrator | 2025-05-14 14:43:29.323468 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.323472 | orchestrator | Wednesday 14 May 2025 14:40:11 +0000 (0:00:00.695) 0:09:37.773 ********* 2025-05-14 14:43:29.323475 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323479 | 
orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323483 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323487 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323491 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323494 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323498 | orchestrator | 2025-05-14 14:43:29.323502 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.323505 | orchestrator | Wednesday 14 May 2025 14:40:12 +0000 (0:00:00.989) 0:09:38.762 ********* 2025-05-14 14:43:29.323509 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323513 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323516 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323520 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323524 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323527 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323531 | orchestrator | 2025-05-14 14:43:29.323535 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.323539 | orchestrator | Wednesday 14 May 2025 14:40:12 +0000 (0:00:00.722) 0:09:39.485 ********* 2025-05-14 14:43:29.323543 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323546 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323550 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323554 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323557 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323561 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323565 | orchestrator | 2025-05-14 14:43:29.323568 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.323572 | orchestrator | Wednesday 14 May 2025 14:40:13 +0000 (0:00:00.881) 0:09:40.366 ********* 2025-05-14 14:43:29.323576 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323582 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323588 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323594 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323601 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323622 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323627 | orchestrator | 2025-05-14 14:43:29.323635 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.323639 | orchestrator | Wednesday 14 May 2025 14:40:14 +0000 (0:00:00.688) 0:09:41.055 ********* 2025-05-14 14:43:29.323643 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323646 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323650 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323654 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323671 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323675 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323679 | orchestrator | 2025-05-14 14:43:29.323686 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.323689 | orchestrator | Wednesday 14 May 2025 14:40:15 +0000 (0:00:00.925) 0:09:41.981 ********* 2025-05-14 14:43:29.323693 | 
orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323701 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323705 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323708 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323712 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323716 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323719 | orchestrator | 2025-05-14 14:43:29.323723 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.323727 | orchestrator | Wednesday 14 May 2025 14:40:16 +0000 (0:00:00.686) 0:09:42.667 ********* 2025-05-14 14:43:29.323730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.323734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.323738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.323741 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323745 | orchestrator | 2025-05-14 14:43:29.323749 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.323752 | orchestrator | Wednesday 14 May 2025 14:40:16 +0000 (0:00:00.434) 0:09:43.102 ********* 2025-05-14 14:43:29.323756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.323760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.323763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.323767 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323771 | orchestrator | 2025-05-14 14:43:29.323774 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.323778 | orchestrator | Wednesday 14 May 2025 14:40:17 +0000 (0:00:00.437) 0:09:43.539 ********* 2025-05-14 14:43:29.323782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.323785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.323789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.323793 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323796 | orchestrator | 2025-05-14 14:43:29.323800 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.323804 | orchestrator | Wednesday 14 May 2025 14:40:17 +0000 (0:00:00.737) 0:09:44.277 ********* 2025-05-14 14:43:29.323807 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323811 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323815 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323818 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323822 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323825 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323829 | orchestrator | 2025-05-14 14:43:29.323833 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.323836 | orchestrator | Wednesday 14 May 2025 14:40:18 +0000 (0:00:01.242) 0:09:45.520 ********* 2025-05-14 14:43:29.323840 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.323844 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323848 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-05-14 14:43:29.323851 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.323855 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323859 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.323862 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323866 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323870 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.323873 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323877 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.323881 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323884 | orchestrator | 2025-05-14 14:43:29.323888 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.323892 | orchestrator | Wednesday 14 May 2025 14:40:20 +0000 (0:00:01.139) 0:09:46.659 ********* 2025-05-14 14:43:29.323900 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323904 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323907 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323911 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323914 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323918 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323922 | orchestrator | 2025-05-14 14:43:29.323925 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.323929 | orchestrator | Wednesday 14 May 2025 14:40:20 +0000 (0:00:00.789) 0:09:47.449 ********* 2025-05-14 14:43:29.323933 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323936 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323940 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323944 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323947 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.323951 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.323955 | orchestrator | 2025-05-14 14:43:29.323958 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.323962 | orchestrator | Wednesday 14 May 2025 14:40:21 +0000 (0:00:00.613) 0:09:48.063 ********* 2025-05-14 14:43:29.323966 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 14:43:29.323969 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 14:43:29.323973 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.323977 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 14:43:29.323980 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.323984 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.323987 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.323991 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.323995 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.323998 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324015 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.324019 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324023 | orchestrator | 2025-05-14 14:43:29.324029 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 
14:43:29.324033 | orchestrator | Wednesday 14 May 2025 14:40:22 +0000 (0:00:01.175) 0:09:49.239 ********* 2025-05-14 14:43:29.324037 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.324040 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.324044 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.324048 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.324052 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324055 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.324059 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324063 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.324067 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324070 | orchestrator | 2025-05-14 14:43:29.324074 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.324078 | orchestrator | Wednesday 14 May 2025 14:40:23 +0000 (0:00:00.569) 0:09:49.809 ********* 2025-05-14 14:43:29.324081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 14:43:29.324085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 14:43:29.324089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 14:43:29.324092 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.324096 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 14:43:29.324105 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 14:43:29.324109 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 14:43:29.324112 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.324116 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 14:43:29.324119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 14:43:29.324123 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 14:43:29.324127 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.324130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.324134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.324137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.324141 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.324145 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.324148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.324152 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324155 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324159 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.324163 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.324166 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.324170 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324173 | 
orchestrator | 2025-05-14 14:43:29.324177 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.324181 | orchestrator | Wednesday 14 May 2025 14:40:24 +0000 (0:00:01.224) 0:09:51.033 ********* 2025-05-14 14:43:29.324185 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.324188 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.324192 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.324196 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324199 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324203 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324207 | orchestrator | 2025-05-14 14:43:29.324210 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.324214 | orchestrator | Wednesday 14 May 2025 14:40:25 +0000 (0:00:01.231) 0:09:52.265 ********* 2025-05-14 14:43:29.324218 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.324221 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.324225 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.324229 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.324232 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324236 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.324239 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324243 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.324247 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324250 | orchestrator | 2025-05-14 14:43:29.324254 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.324258 | orchestrator | Wednesday 14 May 2025 14:40:26 +0000 (0:00:01.219) 0:09:53.485 ********* 2025-05-14 14:43:29.324261 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.324265 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.324269 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.324272 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324276 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324280 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324283 | orchestrator | 2025-05-14 14:43:29.324287 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.324291 | orchestrator | Wednesday 14 May 2025 14:40:28 +0000 (0:00:01.163) 0:09:54.648 ********* 2025-05-14 14:43:29.324298 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:29.324302 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:29.324305 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:29.324311 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324315 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324318 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324322 | orchestrator | 2025-05-14 14:43:29.324328 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-14 14:43:29.324332 | orchestrator | Wednesday 14 May 2025 14:40:29 +0000 (0:00:01.119) 0:09:55.767 ********* 2025-05-14 14:43:29.324336 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.324339 | orchestrator | 2025-05-14 14:43:29.324343 | orchestrator | TASK [ceph-crash : get keys 
from monitors] ************************************* 2025-05-14 14:43:29.324346 | orchestrator | Wednesday 14 May 2025 14:40:32 +0000 (0:00:03.446) 0:09:59.214 ********* 2025-05-14 14:43:29.324350 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.324354 | orchestrator | 2025-05-14 14:43:29.324357 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-14 14:43:29.324361 | orchestrator | Wednesday 14 May 2025 14:40:34 +0000 (0:00:01.760) 0:10:00.975 ********* 2025-05-14 14:43:29.324365 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.324369 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.324372 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.324376 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.324380 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.324383 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.324387 | orchestrator | 2025-05-14 14:43:29.324391 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-14 14:43:29.324394 | orchestrator | Wednesday 14 May 2025 14:40:36 +0000 (0:00:01.929) 0:10:02.904 ********* 2025-05-14 14:43:29.324398 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.324402 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.324405 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.324409 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.324412 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.324416 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.324420 | orchestrator | 2025-05-14 14:43:29.324423 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-14 14:43:29.324427 | orchestrator | Wednesday 14 May 2025 14:40:37 +0000 (0:00:01.476) 0:10:04.381 ********* 2025-05-14 14:43:29.324431 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.324435 | orchestrator | 2025-05-14 14:43:29.324439 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-14 14:43:29.324443 | orchestrator | Wednesday 14 May 2025 14:40:39 +0000 (0:00:01.333) 0:10:05.714 ********* 2025-05-14 14:43:29.324447 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.324450 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.324454 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.324458 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.324461 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.324465 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.324469 | orchestrator | 2025-05-14 14:43:29.324472 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-14 14:43:29.324476 | orchestrator | Wednesday 14 May 2025 14:40:41 +0000 (0:00:01.982) 0:10:07.697 ********* 2025-05-14 14:43:29.324480 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.324483 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.324487 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.324490 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.324494 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.324498 | orchestrator | changed: [testbed-node-5] 2025-05-14 
14:43:29.324501 | orchestrator | 2025-05-14 14:43:29.324505 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-14 14:43:29.324513 | orchestrator | Wednesday 14 May 2025 14:40:45 +0000 (0:00:04.303) 0:10:12.000 ********* 2025-05-14 14:43:29.324517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.324521 | orchestrator | 2025-05-14 14:43:29.324524 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-14 14:43:29.324528 | orchestrator | Wednesday 14 May 2025 14:40:46 +0000 (0:00:01.201) 0:10:13.202 ********* 2025-05-14 14:43:29.324532 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.324535 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.324539 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.324543 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.324546 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.324550 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.324554 | orchestrator | 2025-05-14 14:43:29.324557 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-14 14:43:29.324561 | orchestrator | Wednesday 14 May 2025 14:40:47 +0000 (0:00:00.650) 0:10:13.853 ********* 2025-05-14 14:43:29.324565 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:29.324568 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:29.324572 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.324576 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.324579 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.324583 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:29.324587 | orchestrator | 2025-05-14 14:43:29.324590 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-14 14:43:29.324594 | orchestrator | Wednesday 14 May 2025 14:40:49 +0000 (0:00:02.517) 0:10:16.371 ********* 2025-05-14 14:43:29.324599 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:29.324605 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:29.324710 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:29.324714 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.324718 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.324721 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.324725 | orchestrator | 2025-05-14 14:43:29.324729 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-14 14:43:29.324733 | orchestrator | 2025-05-14 14:43:29.324737 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.324740 | orchestrator | Wednesday 14 May 2025 14:40:52 +0000 (0:00:02.201) 0:10:18.572 ********* 2025-05-14 14:43:29.324753 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.324757 | orchestrator | 2025-05-14 14:43:29.324765 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 14:43:29.324769 | orchestrator | Wednesday 14 May 2025 14:40:52 +0000 (0:00:00.628) 0:10:19.201 ********* 2025-05-14 14:43:29.324773 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324777 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324780 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324784 | orchestrator | 2025-05-14 14:43:29.324788 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.324792 | orchestrator | Wednesday 14 May 2025 14:40:52 +0000 (0:00:00.292) 0:10:19.493 ********* 2025-05-14 14:43:29.324795 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.324799 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.324803 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.324806 | orchestrator | 2025-05-14 14:43:29.324810 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.324814 | orchestrator | Wednesday 14 May 2025 14:40:53 +0000 (0:00:00.666) 0:10:20.159 ********* 2025-05-14 14:43:29.324818 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.324821 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.324830 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.324834 | orchestrator | 2025-05-14 14:43:29.324837 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 14:43:29.324841 | orchestrator | Wednesday 14 May 2025 14:40:54 +0000 (0:00:00.722) 0:10:20.882 ********* 2025-05-14 14:43:29.324845 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.324848 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.324852 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.324856 | orchestrator | 2025-05-14 14:43:29.324860 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.324863 | orchestrator | Wednesday 14 May 2025 14:40:55 +0000 (0:00:00.936) 0:10:21.818 ********* 2025-05-14 14:43:29.324867 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324871 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324874 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324878 | orchestrator | 2025-05-14 14:43:29.324882 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 14:43:29.324885 | orchestrator | Wednesday 14 May 2025 14:40:55 +0000 (0:00:00.285) 0:10:22.103 ********* 2025-05-14 14:43:29.324889 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324893 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324897 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324900 | orchestrator | 2025-05-14 14:43:29.324904 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.324908 | orchestrator | Wednesday 14 May 2025 14:40:55 +0000 (0:00:00.334) 0:10:22.438 ********* 2025-05-14 14:43:29.324911 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324915 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324919 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324922 | orchestrator | 2025-05-14 14:43:29.324926 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.324930 | orchestrator | Wednesday 14 May 2025 14:40:56 +0000 (0:00:00.317) 0:10:22.755 ********* 2025-05-14 14:43:29.324934 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324937 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324941 | orchestrator | skipping: [testbed-node-5] 
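The same container probes repeat here for the ceph-mds play; further down they feed the set_fact handler_*_status tasks, which gate the restart handlers. A minimal sketch of how such a fact could be derived from a probe, assuming the probe result was registered as ceph_mds_container_stat and that the MDS hosts sit in a group named mdss (both assumptions, not taken from this log):

- name: set_fact handler_mds_status   # hedged sketch of the pattern, not the literal role task
  ansible.builtin.set_fact:
    # true only when the earlier probe returned at least one container ID, so the
    # restart handler never fires on a host that is not running the daemon yet
    handler_mds_status: "{{ (ceph_mds_container_stat.stdout_lines | default([])) | length > 0 }}"
  when: inventory_hostname in groups.get('mdss', [])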
2025-05-14 14:43:29.324945 | orchestrator | 2025-05-14 14:43:29.324949 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.324952 | orchestrator | Wednesday 14 May 2025 14:40:56 +0000 (0:00:00.614) 0:10:23.370 ********* 2025-05-14 14:43:29.324956 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324960 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324964 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324967 | orchestrator | 2025-05-14 14:43:29.324971 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.324975 | orchestrator | Wednesday 14 May 2025 14:40:57 +0000 (0:00:00.329) 0:10:23.699 ********* 2025-05-14 14:43:29.324978 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.324982 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.324986 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.324989 | orchestrator | 2025-05-14 14:43:29.324993 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 14:43:29.324997 | orchestrator | Wednesday 14 May 2025 14:40:57 +0000 (0:00:00.330) 0:10:24.030 ********* 2025-05-14 14:43:29.325001 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.325004 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.325008 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.325012 | orchestrator | 2025-05-14 14:43:29.325015 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.325019 | orchestrator | Wednesday 14 May 2025 14:40:58 +0000 (0:00:00.752) 0:10:24.782 ********* 2025-05-14 14:43:29.325023 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325026 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325030 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325034 | orchestrator | 2025-05-14 14:43:29.325041 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.325044 | orchestrator | Wednesday 14 May 2025 14:40:58 +0000 (0:00:00.618) 0:10:25.401 ********* 2025-05-14 14:43:29.325048 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325052 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325055 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325059 | orchestrator | 2025-05-14 14:43:29.325063 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.325066 | orchestrator | Wednesday 14 May 2025 14:40:59 +0000 (0:00:00.335) 0:10:25.737 ********* 2025-05-14 14:43:29.325070 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.325074 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.325078 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.325081 | orchestrator | 2025-05-14 14:43:29.325085 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.325089 | orchestrator | Wednesday 14 May 2025 14:40:59 +0000 (0:00:00.361) 0:10:26.098 ********* 2025-05-14 14:43:29.325092 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.325096 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.325100 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.325106 | orchestrator | 2025-05-14 14:43:29.325110 | orchestrator | TASK [ceph-handler : 
set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.325116 | orchestrator | Wednesday 14 May 2025 14:40:59 +0000 (0:00:00.346) 0:10:26.445 ********* 2025-05-14 14:43:29.325120 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.325124 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.325127 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.325131 | orchestrator | 2025-05-14 14:43:29.325135 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.325138 | orchestrator | Wednesday 14 May 2025 14:41:00 +0000 (0:00:00.660) 0:10:27.105 ********* 2025-05-14 14:43:29.325142 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325146 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325149 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325153 | orchestrator | 2025-05-14 14:43:29.325157 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.325160 | orchestrator | Wednesday 14 May 2025 14:41:00 +0000 (0:00:00.331) 0:10:27.437 ********* 2025-05-14 14:43:29.325164 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325168 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325171 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325175 | orchestrator | 2025-05-14 14:43:29.325179 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.325182 | orchestrator | Wednesday 14 May 2025 14:41:01 +0000 (0:00:00.324) 0:10:27.761 ********* 2025-05-14 14:43:29.325186 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325190 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325193 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325197 | orchestrator | 2025-05-14 14:43:29.325201 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.325204 | orchestrator | Wednesday 14 May 2025 14:41:01 +0000 (0:00:00.311) 0:10:28.073 ********* 2025-05-14 14:43:29.325208 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.325212 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.325215 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.325219 | orchestrator | 2025-05-14 14:43:29.325223 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.325226 | orchestrator | Wednesday 14 May 2025 14:41:02 +0000 (0:00:00.608) 0:10:28.681 ********* 2025-05-14 14:43:29.325230 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325234 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325237 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325241 | orchestrator | 2025-05-14 14:43:29.325244 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.325251 | orchestrator | Wednesday 14 May 2025 14:41:02 +0000 (0:00:00.366) 0:10:29.048 ********* 2025-05-14 14:43:29.325255 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325259 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325263 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325266 | orchestrator | 2025-05-14 14:43:29.325270 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.325274 | 
orchestrator | Wednesday 14 May 2025 14:41:02 +0000 (0:00:00.346) 0:10:29.394 ********* 2025-05-14 14:43:29.325277 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325281 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325285 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325288 | orchestrator | 2025-05-14 14:43:29.325292 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.325295 | orchestrator | Wednesday 14 May 2025 14:41:03 +0000 (0:00:00.332) 0:10:29.726 ********* 2025-05-14 14:43:29.325299 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325303 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325306 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325310 | orchestrator | 2025-05-14 14:43:29.325314 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.325317 | orchestrator | Wednesday 14 May 2025 14:41:03 +0000 (0:00:00.668) 0:10:30.395 ********* 2025-05-14 14:43:29.325321 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325325 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325328 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325332 | orchestrator | 2025-05-14 14:43:29.325336 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.325339 | orchestrator | Wednesday 14 May 2025 14:41:04 +0000 (0:00:00.358) 0:10:30.754 ********* 2025-05-14 14:43:29.325343 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325347 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325350 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325354 | orchestrator | 2025-05-14 14:43:29.325358 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.325361 | orchestrator | Wednesday 14 May 2025 14:41:04 +0000 (0:00:00.360) 0:10:31.115 ********* 2025-05-14 14:43:29.325365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325369 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325372 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325376 | orchestrator | 2025-05-14 14:43:29.325380 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.325383 | orchestrator | Wednesday 14 May 2025 14:41:04 +0000 (0:00:00.347) 0:10:31.462 ********* 2025-05-14 14:43:29.325387 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325391 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325394 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325398 | orchestrator | 2025-05-14 14:43:29.325402 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.325405 | orchestrator | Wednesday 14 May 2025 14:41:05 +0000 (0:00:00.628) 0:10:32.090 ********* 2025-05-14 14:43:29.325409 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325413 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325416 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325420 | orchestrator | 2025-05-14 14:43:29.325424 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 
14:43:29.325427 | orchestrator | Wednesday 14 May 2025 14:41:05 +0000 (0:00:00.366) 0:10:32.457 ********* 2025-05-14 14:43:29.325433 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325437 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325443 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325447 | orchestrator | 2025-05-14 14:43:29.325451 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.325457 | orchestrator | Wednesday 14 May 2025 14:41:06 +0000 (0:00:00.370) 0:10:32.828 ********* 2025-05-14 14:43:29.325461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325465 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325469 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325472 | orchestrator | 2025-05-14 14:43:29.325476 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.325480 | orchestrator | Wednesday 14 May 2025 14:41:06 +0000 (0:00:00.393) 0:10:33.222 ********* 2025-05-14 14:43:29.325483 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325487 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325491 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325494 | orchestrator | 2025-05-14 14:43:29.325498 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.325502 | orchestrator | Wednesday 14 May 2025 14:41:07 +0000 (0:00:00.702) 0:10:33.924 ********* 2025-05-14 14:43:29.325506 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.325509 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.325513 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.325517 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.325520 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325524 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325528 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.325531 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.325535 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325539 | orchestrator | 2025-05-14 14:43:29.325542 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.325546 | orchestrator | Wednesday 14 May 2025 14:41:07 +0000 (0:00:00.494) 0:10:34.419 ********* 2025-05-14 14:43:29.325550 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 14:43:29.325553 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 14:43:29.325557 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 14:43:29.325561 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 14:43:29.325565 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325568 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325572 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 14:43:29.325575 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 14:43:29.325579 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325583 | orchestrator | 2025-05-14 14:43:29.325587 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-14 14:43:29.325590 | orchestrator | Wednesday 14 May 2025 14:41:08 +0000 (0:00:00.517) 0:10:34.936 ********* 2025-05-14 14:43:29.325594 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325598 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325601 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325623 | orchestrator | 2025-05-14 14:43:29.325627 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.325631 | orchestrator | Wednesday 14 May 2025 14:41:08 +0000 (0:00:00.339) 0:10:35.276 ********* 2025-05-14 14:43:29.325634 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325638 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325642 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325645 | orchestrator | 2025-05-14 14:43:29.325649 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.325653 | orchestrator | Wednesday 14 May 2025 14:41:09 +0000 (0:00:00.481) 0:10:35.757 ********* 2025-05-14 14:43:29.325657 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325660 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325667 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325671 | orchestrator | 2025-05-14 14:43:29.325674 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.325678 | orchestrator | Wednesday 14 May 2025 14:41:09 +0000 (0:00:00.317) 0:10:36.074 ********* 2025-05-14 14:43:29.325682 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325685 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325689 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325693 | orchestrator | 2025-05-14 14:43:29.325696 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.325700 | orchestrator | Wednesday 14 May 2025 14:41:09 +0000 (0:00:00.338) 0:10:36.413 ********* 2025-05-14 14:43:29.325704 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325707 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325711 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325715 | orchestrator | 2025-05-14 14:43:29.325718 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.325722 | orchestrator | Wednesday 14 May 2025 14:41:10 +0000 (0:00:00.356) 0:10:36.770 ********* 2025-05-14 14:43:29.325726 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325729 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325733 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325737 | orchestrator | 2025-05-14 14:43:29.325740 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.325744 | orchestrator | Wednesday 14 May 2025 14:41:10 +0000 (0:00:00.630) 0:10:37.400 ********* 2025-05-14 14:43:29.325748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.325751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.325755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.325759 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 14:43:29.325762 | orchestrator | 2025-05-14 14:43:29.325769 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.325775 | orchestrator | Wednesday 14 May 2025 14:41:11 +0000 (0:00:00.576) 0:10:37.977 ********* 2025-05-14 14:43:29.325779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.325783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.325786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.325790 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325794 | orchestrator | 2025-05-14 14:43:29.325797 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.325801 | orchestrator | Wednesday 14 May 2025 14:41:12 +0000 (0:00:00.599) 0:10:38.576 ********* 2025-05-14 14:43:29.325805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.325808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.325812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.325816 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325819 | orchestrator | 2025-05-14 14:43:29.325823 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.325827 | orchestrator | Wednesday 14 May 2025 14:41:12 +0000 (0:00:00.451) 0:10:39.028 ********* 2025-05-14 14:43:29.325830 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325834 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325838 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325841 | orchestrator | 2025-05-14 14:43:29.325845 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.325849 | orchestrator | Wednesday 14 May 2025 14:41:12 +0000 (0:00:00.442) 0:10:39.470 ********* 2025-05-14 14:43:29.325852 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.325856 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325867 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.325871 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325874 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.325878 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325881 | orchestrator | 2025-05-14 14:43:29.325885 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.325889 | orchestrator | Wednesday 14 May 2025 14:41:13 +0000 (0:00:00.728) 0:10:40.199 ********* 2025-05-14 14:43:29.325892 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325896 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325900 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325903 | orchestrator | 2025-05-14 14:43:29.325907 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.325911 | orchestrator | Wednesday 14 May 2025 14:41:14 +0000 (0:00:00.740) 0:10:40.939 ********* 2025-05-14 14:43:29.325914 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325918 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325922 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 14:43:29.325925 | orchestrator | 2025-05-14 14:43:29.325929 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.325933 | orchestrator | Wednesday 14 May 2025 14:41:14 +0000 (0:00:00.385) 0:10:41.325 ********* 2025-05-14 14:43:29.325937 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.325940 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325944 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.325948 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325951 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.325955 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325959 | orchestrator | 2025-05-14 14:43:29.325962 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.325966 | orchestrator | Wednesday 14 May 2025 14:41:15 +0000 (0:00:00.545) 0:10:41.870 ********* 2025-05-14 14:43:29.325970 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.325973 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.325977 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.325981 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.325985 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.325988 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.325992 | orchestrator | 2025-05-14 14:43:29.325996 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.325999 | orchestrator | Wednesday 14 May 2025 14:41:16 +0000 (0:00:00.685) 0:10:42.555 ********* 2025-05-14 14:43:29.326003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.326007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.326010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.326036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.326040 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.326044 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.326048 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326051 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326055 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.326059 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.326062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.326066 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326073 | orchestrator | 2025-05-14 14:43:29.326076 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.326083 | orchestrator | Wednesday 14 May 2025 14:41:16 +0000 (0:00:00.717) 0:10:43.272 ********* 2025-05-14 14:43:29.326087 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326093 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326097 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326100 | orchestrator | 2025-05-14 14:43:29.326104 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.326149 | orchestrator | Wednesday 14 May 2025 14:41:17 +0000 (0:00:00.829) 0:10:44.102 ********* 2025-05-14 14:43:29.326153 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.326157 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326161 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.326164 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326168 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.326172 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326175 | orchestrator | 2025-05-14 14:43:29.326179 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.326183 | orchestrator | Wednesday 14 May 2025 14:41:18 +0000 (0:00:00.613) 0:10:44.716 ********* 2025-05-14 14:43:29.326187 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326190 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326194 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326198 | orchestrator | 2025-05-14 14:43:29.326201 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.326205 | orchestrator | Wednesday 14 May 2025 14:41:19 +0000 (0:00:00.918) 0:10:45.634 ********* 2025-05-14 14:43:29.326209 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326212 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326216 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326220 | orchestrator | 2025-05-14 14:43:29.326223 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-14 14:43:29.326227 | orchestrator | Wednesday 14 May 2025 14:41:19 +0000 (0:00:00.616) 0:10:46.251 ********* 2025-05-14 14:43:29.326231 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326234 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326238 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-14 14:43:29.326242 | orchestrator | 2025-05-14 14:43:29.326246 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-14 14:43:29.326250 | orchestrator | Wednesday 14 May 2025 14:41:20 +0000 (0:00:00.422) 0:10:46.674 ********* 2025-05-14 14:43:29.326253 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.326257 | orchestrator | 2025-05-14 14:43:29.326261 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-14 14:43:29.326264 | orchestrator | Wednesday 14 May 2025 14:41:22 +0000 (0:00:02.197) 0:10:48.871 ********* 2025-05-14 14:43:29.326269 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-14 14:43:29.326275 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326279 | orchestrator | 2025-05-14 14:43:29.326282 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-05-14 14:43:29.326286 | orchestrator | Wednesday 14 May 2025 14:41:22 +0000 (0:00:00.373) 0:10:49.244 ********* 2025-05-14 14:43:29.326291 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:43:29.326304 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:43:29.326308 | orchestrator | 2025-05-14 14:43:29.326312 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-14 14:43:29.326315 | orchestrator | Wednesday 14 May 2025 14:41:29 +0000 (0:00:06.792) 0:10:56.037 ********* 2025-05-14 14:43:29.326319 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:43:29.326323 | orchestrator | 2025-05-14 14:43:29.326327 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-14 14:43:29.326330 | orchestrator | Wednesday 14 May 2025 14:41:32 +0000 (0:00:03.167) 0:10:59.204 ********* 2025-05-14 14:43:29.326334 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.326338 | orchestrator | 2025-05-14 14:43:29.326341 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-14 14:43:29.326345 | orchestrator | Wednesday 14 May 2025 14:41:33 +0000 (0:00:00.748) 0:10:59.953 ********* 2025-05-14 14:43:29.326349 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 14:43:29.326353 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 14:43:29.326356 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 14:43:29.326360 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-14 14:43:29.326364 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-14 14:43:29.326367 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-14 14:43:29.326371 | orchestrator | 2025-05-14 14:43:29.326378 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-14 14:43:29.326385 | orchestrator | Wednesday 14 May 2025 14:41:34 +0000 (0:00:01.106) 0:11:01.059 ********* 2025-05-14 14:43:29.326389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:43:29.326392 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.326396 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 14:43:29.326400 | orchestrator | 2025-05-14 14:43:29.326403 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-14 14:43:29.326407 | orchestrator | Wednesday 14 May 2025 14:41:36 +0000 (0:00:01.767) 0:11:02.826 ********* 2025-05-14 14:43:29.326411 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 
14:43:29.326414 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.326418 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326422 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 14:43:29.326425 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.326429 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326433 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 14:43:29.326436 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.326440 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326444 | orchestrator | 2025-05-14 14:43:29.326447 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-14 14:43:29.326451 | orchestrator | Wednesday 14 May 2025 14:41:37 +0000 (0:00:01.289) 0:11:04.116 ********* 2025-05-14 14:43:29.326455 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326458 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326462 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326466 | orchestrator | 2025-05-14 14:43:29.326469 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-14 14:43:29.326473 | orchestrator | Wednesday 14 May 2025 14:41:38 +0000 (0:00:00.581) 0:11:04.697 ********* 2025-05-14 14:43:29.326480 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.326484 | orchestrator | 2025-05-14 14:43:29.326487 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-14 14:43:29.326491 | orchestrator | Wednesday 14 May 2025 14:41:38 +0000 (0:00:00.588) 0:11:05.286 ********* 2025-05-14 14:43:29.326495 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.326498 | orchestrator | 2025-05-14 14:43:29.326502 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-14 14:43:29.326506 | orchestrator | Wednesday 14 May 2025 14:41:39 +0000 (0:00:00.849) 0:11:06.135 ********* 2025-05-14 14:43:29.326510 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326513 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326517 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326521 | orchestrator | 2025-05-14 14:43:29.326524 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-14 14:43:29.326528 | orchestrator | Wednesday 14 May 2025 14:41:40 +0000 (0:00:01.320) 0:11:07.456 ********* 2025-05-14 14:43:29.326532 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326536 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326539 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326543 | orchestrator | 2025-05-14 14:43:29.326547 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-14 14:43:29.326550 | orchestrator | Wednesday 14 May 2025 14:41:42 +0000 (0:00:01.215) 0:11:08.671 ********* 2025-05-14 14:43:29.326554 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326557 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326561 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326565 | orchestrator | 2025-05-14 
14:43:29.326568 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-14 14:43:29.326572 | orchestrator | Wednesday 14 May 2025 14:41:44 +0000 (0:00:02.074) 0:11:10.745 ********* 2025-05-14 14:43:29.326576 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326579 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326583 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326587 | orchestrator | 2025-05-14 14:43:29.326590 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-14 14:43:29.326594 | orchestrator | Wednesday 14 May 2025 14:41:46 +0000 (0:00:02.021) 0:11:12.767 ********* 2025-05-14 14:43:29.326598 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-14 14:43:29.326601 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-14 14:43:29.326629 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-14 14:43:29.326633 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.326637 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.326641 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.326644 | orchestrator | 2025-05-14 14:43:29.326648 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 14:43:29.326652 | orchestrator | Wednesday 14 May 2025 14:42:03 +0000 (0:00:17.138) 0:11:29.905 ********* 2025-05-14 14:43:29.326656 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326659 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326663 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326667 | orchestrator | 2025-05-14 14:43:29.326670 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 14:43:29.326674 | orchestrator | Wednesday 14 May 2025 14:42:04 +0000 (0:00:00.827) 0:11:30.733 ********* 2025-05-14 14:43:29.326678 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.326682 | orchestrator | 2025-05-14 14:43:29.326685 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-14 14:43:29.326693 | orchestrator | Wednesday 14 May 2025 14:42:05 +0000 (0:00:00.835) 0:11:31.568 ********* 2025-05-14 14:43:29.326699 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.326703 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.326707 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.326711 | orchestrator | 2025-05-14 14:43:29.326716 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-14 14:43:29.326720 | orchestrator | Wednesday 14 May 2025 14:42:05 +0000 (0:00:00.345) 0:11:31.914 ********* 2025-05-14 14:43:29.326724 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326728 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326731 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326735 | orchestrator | 2025-05-14 14:43:29.326739 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-14 14:43:29.326743 | orchestrator | Wednesday 14 May 2025 14:42:06 +0000 (0:00:01.272) 0:11:33.186 ********* 2025-05-14 14:43:29.326746 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.326750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.326754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.326757 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326761 | orchestrator | 2025-05-14 14:43:29.326764 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-14 14:43:29.326768 | orchestrator | Wednesday 14 May 2025 14:42:07 +0000 (0:00:01.238) 0:11:34.425 ********* 2025-05-14 14:43:29.326772 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.326776 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.326779 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.326783 | orchestrator | 2025-05-14 14:43:29.326787 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.326790 | orchestrator | Wednesday 14 May 2025 14:42:08 +0000 (0:00:00.353) 0:11:34.779 ********* 2025-05-14 14:43:29.326794 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.326798 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.326801 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.326805 | orchestrator | 2025-05-14 14:43:29.326809 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-14 14:43:29.326812 | orchestrator | 2025-05-14 14:43:29.326816 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 14:43:29.326820 | orchestrator | Wednesday 14 May 2025 14:42:10 +0000 (0:00:02.209) 0:11:36.988 ********* 2025-05-14 14:43:29.326824 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.326827 | orchestrator | 2025-05-14 14:43:29.326831 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 14:43:29.326835 | orchestrator | Wednesday 14 May 2025 14:42:11 +0000 (0:00:00.779) 0:11:37.768 ********* 2025-05-14 14:43:29.326838 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326842 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326846 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326850 | orchestrator | 2025-05-14 14:43:29.326853 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 14:43:29.326857 | orchestrator | Wednesday 14 May 2025 14:42:11 +0000 (0:00:00.326) 0:11:38.094 ********* 2025-05-14 14:43:29.326861 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.326865 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.326868 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.326872 | orchestrator | 2025-05-14 14:43:29.326876 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 14:43:29.326880 | orchestrator | Wednesday 14 May 2025 14:42:12 +0000 (0:00:00.720) 0:11:38.814 ********* 2025-05-14 14:43:29.326883 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.326887 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.326893 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.326897 | orchestrator | 2025-05-14 14:43:29.326901 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
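
The ceph-handler "check for a ... container" tasks that begin here only probe whether the corresponding daemon container is already running on each node; the result is later recorded as a handler_*_status fact and gates the restart handlers. A minimal sketch of such a probe, assuming podman as the container engine and a ceph-osd container name; this is not the project's actual task file:

    - name: Probe for a running ceph-osd container and record a handler fact (sketch)
      hosts: all
      become: true
      tasks:
        - name: check for an osd container
          # 'podman ps -q' prints matching container IDs; empty output means no such container
          ansible.builtin.command: podman ps -q --filter name=ceph-osd
          register: ceph_osd_container_stat
          changed_when: false
          failed_when: false

        - name: set_fact handler_osd_status
          ansible.builtin.set_fact:
            handler_osd_status: "{{ (ceph_osd_container_stat.stdout | length) > 0 }}"
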
2025-05-14 14:43:29.326904 | orchestrator | Wednesday 14 May 2025 14:42:13 +0000 (0:00:01.052) 0:11:39.867 ********* 2025-05-14 14:43:29.326908 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.326912 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.326915 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.326919 | orchestrator | 2025-05-14 14:43:29.326923 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 14:43:29.326926 | orchestrator | Wednesday 14 May 2025 14:42:14 +0000 (0:00:00.800) 0:11:40.668 ********* 2025-05-14 14:43:29.326930 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326934 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326938 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326941 | orchestrator | 2025-05-14 14:43:29.326945 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 14:43:29.326949 | orchestrator | Wednesday 14 May 2025 14:42:14 +0000 (0:00:00.316) 0:11:40.985 ********* 2025-05-14 14:43:29.326953 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326956 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326960 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326964 | orchestrator | 2025-05-14 14:43:29.326967 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 14:43:29.326971 | orchestrator | Wednesday 14 May 2025 14:42:14 +0000 (0:00:00.322) 0:11:41.307 ********* 2025-05-14 14:43:29.326975 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.326978 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.326982 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.326986 | orchestrator | 2025-05-14 14:43:29.326990 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 14:43:29.326993 | orchestrator | Wednesday 14 May 2025 14:42:15 +0000 (0:00:00.591) 0:11:41.899 ********* 2025-05-14 14:43:29.326997 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327001 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327004 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327008 | orchestrator | 2025-05-14 14:43:29.327012 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 14:43:29.327015 | orchestrator | Wednesday 14 May 2025 14:42:15 +0000 (0:00:00.328) 0:11:42.228 ********* 2025-05-14 14:43:29.327019 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327023 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327029 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327033 | orchestrator | 2025-05-14 14:43:29.327039 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 14:43:29.327043 | orchestrator | Wednesday 14 May 2025 14:42:16 +0000 (0:00:00.326) 0:11:42.555 ********* 2025-05-14 14:43:29.327046 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327050 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327054 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327057 | orchestrator | 2025-05-14 14:43:29.327061 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 14:43:29.327065 | orchestrator | Wednesday 14 May 2025 14:42:16 
+0000 (0:00:00.316) 0:11:42.872 ********* 2025-05-14 14:43:29.327069 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.327072 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.327076 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.327080 | orchestrator | 2025-05-14 14:43:29.327083 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 14:43:29.327087 | orchestrator | Wednesday 14 May 2025 14:42:17 +0000 (0:00:01.025) 0:11:43.897 ********* 2025-05-14 14:43:29.327091 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327094 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327098 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327102 | orchestrator | 2025-05-14 14:43:29.327108 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 14:43:29.327112 | orchestrator | Wednesday 14 May 2025 14:42:17 +0000 (0:00:00.335) 0:11:44.232 ********* 2025-05-14 14:43:29.327116 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327119 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327123 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327127 | orchestrator | 2025-05-14 14:43:29.327130 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 14:43:29.327134 | orchestrator | Wednesday 14 May 2025 14:42:18 +0000 (0:00:00.324) 0:11:44.557 ********* 2025-05-14 14:43:29.327138 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.327141 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.327145 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.327149 | orchestrator | 2025-05-14 14:43:29.327152 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 14:43:29.327156 | orchestrator | Wednesday 14 May 2025 14:42:18 +0000 (0:00:00.334) 0:11:44.891 ********* 2025-05-14 14:43:29.327160 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.327164 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.327167 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.327171 | orchestrator | 2025-05-14 14:43:29.327174 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 14:43:29.327178 | orchestrator | Wednesday 14 May 2025 14:42:19 +0000 (0:00:00.716) 0:11:45.608 ********* 2025-05-14 14:43:29.327182 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.327185 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.327189 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.327193 | orchestrator | 2025-05-14 14:43:29.327196 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 14:43:29.327200 | orchestrator | Wednesday 14 May 2025 14:42:19 +0000 (0:00:00.347) 0:11:45.955 ********* 2025-05-14 14:43:29.327204 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327207 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327211 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327215 | orchestrator | 2025-05-14 14:43:29.327219 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 14:43:29.327222 | orchestrator | Wednesday 14 May 2025 14:42:19 +0000 (0:00:00.416) 0:11:46.371 ********* 2025-05-14 14:43:29.327226 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
14:43:29.327230 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327233 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327237 | orchestrator | 2025-05-14 14:43:29.327241 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 14:43:29.327244 | orchestrator | Wednesday 14 May 2025 14:42:20 +0000 (0:00:00.315) 0:11:46.687 ********* 2025-05-14 14:43:29.327248 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327252 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327255 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327259 | orchestrator | 2025-05-14 14:43:29.327263 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 14:43:29.327267 | orchestrator | Wednesday 14 May 2025 14:42:20 +0000 (0:00:00.613) 0:11:47.300 ********* 2025-05-14 14:43:29.327270 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.327274 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.327278 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.327281 | orchestrator | 2025-05-14 14:43:29.327285 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 14:43:29.327289 | orchestrator | Wednesday 14 May 2025 14:42:21 +0000 (0:00:00.330) 0:11:47.630 ********* 2025-05-14 14:43:29.327293 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327296 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327300 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327304 | orchestrator | 2025-05-14 14:43:29.327308 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 14:43:29.327314 | orchestrator | Wednesday 14 May 2025 14:42:21 +0000 (0:00:00.345) 0:11:47.975 ********* 2025-05-14 14:43:29.327318 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327322 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327326 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327329 | orchestrator | 2025-05-14 14:43:29.327333 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 14:43:29.327337 | orchestrator | Wednesday 14 May 2025 14:42:21 +0000 (0:00:00.349) 0:11:48.324 ********* 2025-05-14 14:43:29.327340 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327344 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327348 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327351 | orchestrator | 2025-05-14 14:43:29.327355 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 14:43:29.327359 | orchestrator | Wednesday 14 May 2025 14:42:22 +0000 (0:00:00.682) 0:11:49.006 ********* 2025-05-14 14:43:29.327363 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327366 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327370 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327374 | orchestrator | 2025-05-14 14:43:29.327379 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 14:43:29.327386 | orchestrator | Wednesday 14 May 2025 14:42:22 +0000 (0:00:00.410) 0:11:49.417 ********* 2025-05-14 14:43:29.327390 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327394 | orchestrator | skipping: [testbed-node-4] 2025-05-14 
14:43:29.327397 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327401 | orchestrator | 2025-05-14 14:43:29.327405 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 14:43:29.327408 | orchestrator | Wednesday 14 May 2025 14:42:23 +0000 (0:00:00.383) 0:11:49.800 ********* 2025-05-14 14:43:29.327412 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327416 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327419 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327423 | orchestrator | 2025-05-14 14:43:29.327427 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 14:43:29.327430 | orchestrator | Wednesday 14 May 2025 14:42:23 +0000 (0:00:00.324) 0:11:50.125 ********* 2025-05-14 14:43:29.327434 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327438 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327441 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327445 | orchestrator | 2025-05-14 14:43:29.327449 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 14:43:29.327453 | orchestrator | Wednesday 14 May 2025 14:42:24 +0000 (0:00:00.626) 0:11:50.751 ********* 2025-05-14 14:43:29.327456 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327460 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327463 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327467 | orchestrator | 2025-05-14 14:43:29.327471 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 14:43:29.327475 | orchestrator | Wednesday 14 May 2025 14:42:24 +0000 (0:00:00.365) 0:11:51.117 ********* 2025-05-14 14:43:29.327478 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327482 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327486 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327489 | orchestrator | 2025-05-14 14:43:29.327493 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 14:43:29.327497 | orchestrator | Wednesday 14 May 2025 14:42:24 +0000 (0:00:00.333) 0:11:51.450 ********* 2025-05-14 14:43:29.327500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327504 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327508 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327512 | orchestrator | 2025-05-14 14:43:29.327515 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 14:43:29.327522 | orchestrator | Wednesday 14 May 2025 14:42:25 +0000 (0:00:00.323) 0:11:51.774 ********* 2025-05-14 14:43:29.327526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327529 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327533 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327536 | orchestrator | 2025-05-14 14:43:29.327540 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 14:43:29.327544 | orchestrator | Wednesday 14 May 2025 14:42:25 +0000 (0:00:00.599) 0:11:52.373 ********* 2025-05-14 14:43:29.327547 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327551 | orchestrator | 
skipping: [testbed-node-4] 2025-05-14 14:43:29.327555 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327558 | orchestrator | 2025-05-14 14:43:29.327562 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 14:43:29.327566 | orchestrator | Wednesday 14 May 2025 14:42:26 +0000 (0:00:00.340) 0:11:52.714 ********* 2025-05-14 14:43:29.327569 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.327573 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 14:43:29.327577 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327580 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.327584 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 14:43:29.327588 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327591 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.327595 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 14:43:29.327598 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327602 | orchestrator | 2025-05-14 14:43:29.327617 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 14:43:29.327622 | orchestrator | Wednesday 14 May 2025 14:42:26 +0000 (0:00:00.361) 0:11:53.075 ********* 2025-05-14 14:43:29.327625 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 14:43:29.327629 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 14:43:29.327633 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327636 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 14:43:29.327640 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 14:43:29.327644 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327647 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 14:43:29.327651 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 14:43:29.327654 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327658 | orchestrator | 2025-05-14 14:43:29.327662 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 14:43:29.327666 | orchestrator | Wednesday 14 May 2025 14:42:26 +0000 (0:00:00.353) 0:11:53.428 ********* 2025-05-14 14:43:29.327669 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327673 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327676 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327680 | orchestrator | 2025-05-14 14:43:29.327684 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 14:43:29.327687 | orchestrator | Wednesday 14 May 2025 14:42:27 +0000 (0:00:00.596) 0:11:54.025 ********* 2025-05-14 14:43:29.327691 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327695 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327701 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327705 | orchestrator | 2025-05-14 14:43:29.327711 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:43:29.327715 | orchestrator | Wednesday 14 May 2025 14:42:27 +0000 (0:00:00.330) 0:11:54.355 ********* 2025-05-14 
14:43:29.327718 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327722 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327730 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327734 | orchestrator | 2025-05-14 14:43:29.327737 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:43:29.327741 | orchestrator | Wednesday 14 May 2025 14:42:28 +0000 (0:00:00.324) 0:11:54.679 ********* 2025-05-14 14:43:29.327745 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327748 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327752 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327756 | orchestrator | 2025-05-14 14:43:29.327759 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:43:29.327763 | orchestrator | Wednesday 14 May 2025 14:42:28 +0000 (0:00:00.327) 0:11:55.007 ********* 2025-05-14 14:43:29.327767 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327770 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327774 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327778 | orchestrator | 2025-05-14 14:43:29.327781 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:43:29.327785 | orchestrator | Wednesday 14 May 2025 14:42:29 +0000 (0:00:00.591) 0:11:55.598 ********* 2025-05-14 14:43:29.327789 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327792 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327796 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327800 | orchestrator | 2025-05-14 14:43:29.327803 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:43:29.327807 | orchestrator | Wednesday 14 May 2025 14:42:29 +0000 (0:00:00.332) 0:11:55.931 ********* 2025-05-14 14:43:29.327811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.327814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.327818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.327821 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327825 | orchestrator | 2025-05-14 14:43:29.327829 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:43:29.327833 | orchestrator | Wednesday 14 May 2025 14:42:29 +0000 (0:00:00.431) 0:11:56.362 ********* 2025-05-14 14:43:29.327836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.327840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.327843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.327847 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327851 | orchestrator | 2025-05-14 14:43:29.327854 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:43:29.327858 | orchestrator | Wednesday 14 May 2025 14:42:30 +0000 (0:00:00.415) 0:11:56.777 ********* 2025-05-14 14:43:29.327862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.327865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.327869 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-14 14:43:29.327873 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327876 | orchestrator | 2025-05-14 14:43:29.327880 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.327884 | orchestrator | Wednesday 14 May 2025 14:42:30 +0000 (0:00:00.428) 0:11:57.206 ********* 2025-05-14 14:43:29.327887 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327891 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327895 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327898 | orchestrator | 2025-05-14 14:43:29.327902 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:43:29.327906 | orchestrator | Wednesday 14 May 2025 14:42:31 +0000 (0:00:00.334) 0:11:57.540 ********* 2025-05-14 14:43:29.327909 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.327913 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327919 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.327923 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327926 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.327930 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327934 | orchestrator | 2025-05-14 14:43:29.327937 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:43:29.327941 | orchestrator | Wednesday 14 May 2025 14:42:31 +0000 (0:00:00.806) 0:11:58.347 ********* 2025-05-14 14:43:29.327945 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327948 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327952 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327956 | orchestrator | 2025-05-14 14:43:29.327960 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:43:29.327963 | orchestrator | Wednesday 14 May 2025 14:42:32 +0000 (0:00:00.332) 0:11:58.679 ********* 2025-05-14 14:43:29.327967 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327971 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.327974 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.327978 | orchestrator | 2025-05-14 14:43:29.327982 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:43:29.327985 | orchestrator | Wednesday 14 May 2025 14:42:32 +0000 (0:00:00.328) 0:11:59.007 ********* 2025-05-14 14:43:29.327989 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:43:29.327993 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.327996 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:43:29.328000 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328004 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:43:29.328007 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328011 | orchestrator | 2025-05-14 14:43:29.328017 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:43:29.328023 | orchestrator | Wednesday 14 May 2025 14:42:33 +0000 (0:00:00.721) 0:11:59.729 ********* 2025-05-14 14:43:29.328027 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
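
The surrounding ceph-facts tasks rebuild the rados gateway facts for the ceph-rgw play; they are all skipped in this run, but the loop items show the shape being assembled: one entry per instance with instance_name, radosgw_address and radosgw_frontend_port. A minimal sketch of composing such a list, with assumed inventory group and variable names (rgws, _radosgw_address, radosgw_frontend_port, radosgw_num_instances) rather than the role's real ones:

    - name: Compose per-host rgw_instances entries (sketch)
      hosts: rgws
      gather_facts: false
      vars:
        _radosgw_address: 192.168.16.13      # assumed: address resolved by the earlier ceph-facts tasks
        radosgw_frontend_port: 8081
        radosgw_num_instances: 1
      tasks:
        - name: set_fact rgw_instances
          ansible.builtin.set_fact:
            # append one dict per instance, bumping the frontend port per instance index
            rgw_instances: >-
              {{ (rgw_instances | default([])) +
                 [{'instance_name': 'rgw' ~ item,
                   'radosgw_address': _radosgw_address,
                   'radosgw_frontend_port': radosgw_frontend_port + item}] }}
          loop: "{{ range(0, radosgw_num_instances) | list }}"
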
2025-05-14 14:43:29.328030 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328034 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.328038 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328042 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:43:29.328045 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328049 | orchestrator | 2025-05-14 14:43:29.328053 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:43:29.328057 | orchestrator | Wednesday 14 May 2025 14:42:33 +0000 (0:00:00.356) 0:12:00.085 ********* 2025-05-14 14:43:29.328060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.328064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.328068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.328071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:43:29.328075 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:43:29.328079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:43:29.328082 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328086 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:43:29.328093 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:43:29.328097 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:43:29.328100 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328107 | orchestrator | 2025-05-14 14:43:29.328110 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 14:43:29.328114 | orchestrator | Wednesday 14 May 2025 14:42:34 +0000 (0:00:00.601) 0:12:00.686 ********* 2025-05-14 14:43:29.328118 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328121 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328125 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328129 | orchestrator | 2025-05-14 14:43:29.328132 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 14:43:29.328136 | orchestrator | Wednesday 14 May 2025 14:42:34 +0000 (0:00:00.803) 0:12:01.490 ********* 2025-05-14 14:43:29.328140 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.328143 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328147 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.328151 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328154 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.328158 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328162 | orchestrator | 2025-05-14 14:43:29.328165 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 14:43:29.328169 | orchestrator | Wednesday 14 May 2025 14:42:35 +0000 (0:00:00.574) 0:12:02.064 ********* 2025-05-14 14:43:29.328173 | orchestrator | skipping: [testbed-node-3] 
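
Both the ceph-mds play above and the ceph-rgw common.yml tasks just below use the same fetch-and-distribute pattern: "get keys from monitors" reads a keyring on the first monitor, and "copy ceph key(s) if needed" writes it out on every node of the group. A minimal sketch of that pattern, with an assumed inventory group name (mons), an assumed key name, and assumed ownership (the 167 uid/gid of Ceph container images is an assumption here):

    - name: Fetch a keyring on the first monitor and distribute it (sketch)
      hosts: rgws
      become: true
      tasks:
        - name: get keys from monitors
          # run once, delegated to the first monitor; the registered result is shared with all hosts
          ansible.builtin.command: ceph auth get client.bootstrap-rgw
          register: _bootstrap_rgw_keyring
          changed_when: false
          delegate_to: "{{ groups['mons'][0] }}"
          run_once: true

        - name: copy ceph key(s) if needed
          ansible.builtin.copy:
            content: "{{ _bootstrap_rgw_keyring.stdout }}\n"
            dest: /var/lib/ceph/bootstrap-rgw/ceph.keyring
            owner: "167"   # assumption: the ceph uid/gid used inside the container images
            group: "167"
            mode: "0600"
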
2025-05-14 14:43:29.328176 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328180 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328184 | orchestrator | 2025-05-14 14:43:29.328187 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 14:43:29.328191 | orchestrator | Wednesday 14 May 2025 14:42:36 +0000 (0:00:00.826) 0:12:02.891 ********* 2025-05-14 14:43:29.328195 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328198 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328202 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328206 | orchestrator | 2025-05-14 14:43:29.328209 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-14 14:43:29.328213 | orchestrator | Wednesday 14 May 2025 14:42:36 +0000 (0:00:00.562) 0:12:03.453 ********* 2025-05-14 14:43:29.328217 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.328220 | orchestrator | 2025-05-14 14:43:29.328224 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-14 14:43:29.328228 | orchestrator | Wednesday 14 May 2025 14:42:37 +0000 (0:00:00.821) 0:12:04.274 ********* 2025-05-14 14:43:29.328231 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-14 14:43:29.328235 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-14 14:43:29.328239 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-14 14:43:29.328242 | orchestrator | 2025-05-14 14:43:29.328246 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-14 14:43:29.328250 | orchestrator | Wednesday 14 May 2025 14:42:38 +0000 (0:00:00.683) 0:12:04.958 ********* 2025-05-14 14:43:29.328253 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:43:29.328257 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.328261 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 14:43:29.328264 | orchestrator | 2025-05-14 14:43:29.328268 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-14 14:43:29.328272 | orchestrator | Wednesday 14 May 2025 14:42:40 +0000 (0:00:01.871) 0:12:06.830 ********* 2025-05-14 14:43:29.328276 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 14:43:29.328279 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 14:43:29.328283 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328289 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 14:43:29.328296 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 14:43:29.328302 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328306 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 14:43:29.328309 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 14:43:29.328313 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.328317 | orchestrator | 2025-05-14 14:43:29.328320 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-14 14:43:29.328324 | orchestrator | Wednesday 14 May 2025 14:42:41 +0000 (0:00:01.618) 0:12:08.448 ********* 2025-05-14 14:43:29.328328 | 
orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328331 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328335 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328339 | orchestrator | 2025-05-14 14:43:29.328342 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-14 14:43:29.328346 | orchestrator | Wednesday 14 May 2025 14:42:42 +0000 (0:00:00.418) 0:12:08.867 ********* 2025-05-14 14:43:29.328350 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328353 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328357 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328361 | orchestrator | 2025-05-14 14:43:29.328364 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-14 14:43:29.328368 | orchestrator | Wednesday 14 May 2025 14:42:42 +0000 (0:00:00.326) 0:12:09.193 ********* 2025-05-14 14:43:29.328372 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-14 14:43:29.328375 | orchestrator | 2025-05-14 14:43:29.328379 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-14 14:43:29.328383 | orchestrator | Wednesday 14 May 2025 14:42:42 +0000 (0:00:00.233) 0:12:09.426 ********* 2025-05-14 14:43:29.328387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328406 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328410 | orchestrator | 2025-05-14 14:43:29.328413 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-14 14:43:29.328417 | orchestrator | Wednesday 14 May 2025 14:42:44 +0000 (0:00:01.156) 0:12:10.582 ********* 2025-05-14 14:43:29.328421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328440 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328443 | 
orchestrator | 2025-05-14 14:43:29.328447 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-14 14:43:29.328453 | orchestrator | Wednesday 14 May 2025 14:42:44 +0000 (0:00:00.794) 0:12:11.377 ********* 2025-05-14 14:43:29.328457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 14:43:29.328476 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328479 | orchestrator | 2025-05-14 14:43:29.328483 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-14 14:43:29.328487 | orchestrator | Wednesday 14 May 2025 14:42:45 +0000 (0:00:00.681) 0:12:12.058 ********* 2025-05-14 14:43:29.328490 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 14:43:29.328500 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 14:43:29.328504 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 14:43:29.328507 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 14:43:29.328511 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 14:43:29.328515 | orchestrator | 2025-05-14 14:43:29.328518 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-14 14:43:29.328522 | orchestrator | Wednesday 14 May 2025 14:43:10 +0000 (0:00:25.226) 0:12:37.285 ********* 2025-05-14 14:43:29.328526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328530 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328533 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328537 | orchestrator | 2025-05-14 14:43:29.328540 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-14 14:43:29.328544 | orchestrator | Wednesday 14 May 2025 14:43:11 +0000 (0:00:00.510) 0:12:37.795 ********* 2025-05-14 14:43:29.328548 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328552 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328555 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328559 | orchestrator | 2025-05-14 14:43:29.328563 | 
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-14 14:43:29.328566 | orchestrator | Wednesday 14 May 2025 14:43:11 +0000 (0:00:00.372) 0:12:38.168 ********* 2025-05-14 14:43:29.328570 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.328574 | orchestrator | 2025-05-14 14:43:29.328578 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-14 14:43:29.328581 | orchestrator | Wednesday 14 May 2025 14:43:12 +0000 (0:00:00.575) 0:12:38.743 ********* 2025-05-14 14:43:29.328585 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.328589 | orchestrator | 2025-05-14 14:43:29.328592 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-14 14:43:29.328596 | orchestrator | Wednesday 14 May 2025 14:43:13 +0000 (0:00:00.829) 0:12:39.572 ********* 2025-05-14 14:43:29.328603 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328616 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328620 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.328624 | orchestrator | 2025-05-14 14:43:29.328627 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-14 14:43:29.328631 | orchestrator | Wednesday 14 May 2025 14:43:14 +0000 (0:00:01.187) 0:12:40.759 ********* 2025-05-14 14:43:29.328635 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328638 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328642 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.328646 | orchestrator | 2025-05-14 14:43:29.328650 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-14 14:43:29.328653 | orchestrator | Wednesday 14 May 2025 14:43:15 +0000 (0:00:01.135) 0:12:41.895 ********* 2025-05-14 14:43:29.328657 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328661 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328664 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.328668 | orchestrator | 2025-05-14 14:43:29.328672 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-14 14:43:29.328676 | orchestrator | Wednesday 14 May 2025 14:43:17 +0000 (0:00:01.919) 0:12:43.814 ********* 2025-05-14 14:43:29.328679 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.328683 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.328687 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 14:43:29.328691 | orchestrator | 2025-05-14 14:43:29.328694 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-14 14:43:29.328698 | orchestrator | Wednesday 14 May 2025 14:43:19 +0000 (0:00:01.923) 0:12:45.738 ********* 2025-05-14 14:43:29.328702 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328706 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:43:29.328709 | 
orchestrator | skipping: [testbed-node-5] 2025-05-14 14:43:29.328713 | orchestrator | 2025-05-14 14:43:29.328717 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 14:43:29.328720 | orchestrator | Wednesday 14 May 2025 14:43:20 +0000 (0:00:01.156) 0:12:46.894 ********* 2025-05-14 14:43:29.328724 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328728 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328731 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.328735 | orchestrator | 2025-05-14 14:43:29.328739 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 14:43:29.328743 | orchestrator | Wednesday 14 May 2025 14:43:21 +0000 (0:00:00.693) 0:12:47.588 ********* 2025-05-14 14:43:29.328746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:43:29.328750 | orchestrator | 2025-05-14 14:43:29.328754 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-14 14:43:29.328760 | orchestrator | Wednesday 14 May 2025 14:43:21 +0000 (0:00:00.733) 0:12:48.321 ********* 2025-05-14 14:43:29.328766 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.328770 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.328774 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.328778 | orchestrator | 2025-05-14 14:43:29.328781 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-14 14:43:29.328785 | orchestrator | Wednesday 14 May 2025 14:43:22 +0000 (0:00:00.334) 0:12:48.655 ********* 2025-05-14 14:43:29.328789 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328792 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328796 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:43:29.328802 | orchestrator | 2025-05-14 14:43:29.328806 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-14 14:43:29.328810 | orchestrator | Wednesday 14 May 2025 14:43:23 +0000 (0:00:01.210) 0:12:49.866 ********* 2025-05-14 14:43:29.328814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:43:29.328817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:43:29.328821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:43:29.328825 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:43:29.328828 | orchestrator | 2025-05-14 14:43:29.328832 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-14 14:43:29.328836 | orchestrator | Wednesday 14 May 2025 14:43:24 +0000 (0:00:00.954) 0:12:50.821 ********* 2025-05-14 14:43:29.328839 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:43:29.328843 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:43:29.328847 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:43:29.328851 | orchestrator | 2025-05-14 14:43:29.328854 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 14:43:29.328858 | orchestrator | Wednesday 14 May 2025 14:43:24 +0000 (0:00:00.322) 0:12:51.143 ********* 2025-05-14 14:43:29.328862 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:43:29.328865 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:43:29.328869 | orchestrator 
| changed: [testbed-node-5] 2025-05-14 14:43:29.328873 | orchestrator | 2025-05-14 14:43:29.328876 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:43:29.328880 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-14 14:43:29.328884 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-14 14:43:29.328888 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-14 14:43:29.328892 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-14 14:43:29.328895 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-14 14:43:29.328899 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-14 14:43:29.328903 | orchestrator | 2025-05-14 14:43:29.328907 | orchestrator | 2025-05-14 14:43:29.328910 | orchestrator | 2025-05-14 14:43:29.328914 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:43:29.328918 | orchestrator | Wednesday 14 May 2025 14:43:26 +0000 (0:00:01.400) 0:12:52.544 ********* 2025-05-14 14:43:29.328922 | orchestrator | =============================================================================== 2025-05-14 14:43:29.328925 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 42.68s 2025-05-14 14:43:29.328929 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.62s 2025-05-14 14:43:29.328933 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 25.23s 2025-05-14 14:43:29.328937 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.67s 2025-05-14 14:43:29.328940 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.14s 2025-05-14 14:43:29.328944 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.51s 2025-05-14 14:43:29.328948 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.64s 2025-05-14 14:43:29.328951 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.38s 2025-05-14 14:43:29.328958 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.20s 2025-05-14 14:43:29.328961 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.79s 2025-05-14 14:43:29.328965 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.38s 2025-05-14 14:43:29.328969 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.08s 2025-05-14 14:43:29.328972 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.51s 2025-05-14 14:43:29.328976 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.71s 2025-05-14 14:43:29.328980 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.61s 2025-05-14 14:43:29.328984 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 4.50s 2025-05-14 14:43:29.328987 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.30s 2025-05-14 14:43:29.328993 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.80s 2025-05-14 14:43:29.328999 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.48s 2025-05-14 14:43:29.329003 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.45s 2025-05-14 14:43:29.329007 | orchestrator | 2025-05-14 14:43:29 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:29.329011 | orchestrator | 2025-05-14 14:43:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:32.351110 | orchestrator | 2025-05-14 14:43:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:32.352575 | orchestrator | 2025-05-14 14:43:32 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:32.354207 | orchestrator | 2025-05-14 14:43:32 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:32.354258 | orchestrator | 2025-05-14 14:43:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:35.401259 | orchestrator | 2025-05-14 14:43:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:35.401669 | orchestrator | 2025-05-14 14:43:35 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:35.403912 | orchestrator | 2025-05-14 14:43:35 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:35.404051 | orchestrator | 2025-05-14 14:43:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:38.448692 | orchestrator | 2025-05-14 14:43:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:38.449912 | orchestrator | 2025-05-14 14:43:38 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in 
state STARTED 2025-05-14 14:43:38.452170 | orchestrator | 2025-05-14 14:43:38 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:38.452425 | orchestrator | 2025-05-14 14:43:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:41.496002 | orchestrator | 2025-05-14 14:43:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:41.496624 | orchestrator | 2025-05-14 14:43:41 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state STARTED 2025-05-14 14:43:41.497902 | orchestrator | 2025-05-14 14:43:41 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:41.497955 | orchestrator | 2025-05-14 14:43:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:44.559562 | orchestrator | 2025-05-14 14:43:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:44.562367 | orchestrator | 2025-05-14 14:43:44 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:43:44.570542 | orchestrator | 2025-05-14 14:43:44.570585 | orchestrator | 2025-05-14 14:43:44.570598 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-14 14:43:44.570610 | orchestrator | 2025-05-14 14:43:44.570621 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-14 14:43:44.570632 | orchestrator | Wednesday 14 May 2025 14:40:12 +0000 (0:00:00.168) 0:00:00.168 ********* 2025-05-14 14:43:44.570666 | orchestrator | ok: [localhost] => { 2025-05-14 14:43:44.570680 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-14 14:43:44.570691 | orchestrator | } 2025-05-14 14:43:44.570702 | orchestrator | 2025-05-14 14:43:44.570713 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-14 14:43:44.570724 | orchestrator | Wednesday 14 May 2025 14:40:12 +0000 (0:00:00.042) 0:00:00.211 ********* 2025-05-14 14:43:44.570736 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-14 14:43:44.570749 | orchestrator | ...ignoring 2025-05-14 14:43:44.570760 | orchestrator | 2025-05-14 14:43:44.570771 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-14 14:43:44.570781 | orchestrator | Wednesday 14 May 2025 14:40:15 +0000 (0:00:02.615) 0:00:02.827 ********* 2025-05-14 14:43:44.570792 | orchestrator | skipping: [localhost] 2025-05-14 14:43:44.570803 | orchestrator | 2025-05-14 14:43:44.570837 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-14 14:43:44.570850 | orchestrator | Wednesday 14 May 2025 14:40:15 +0000 (0:00:00.095) 0:00:02.922 ********* 2025-05-14 14:43:44.570860 | orchestrator | ok: [localhost] 2025-05-14 14:43:44.570871 | orchestrator | 2025-05-14 14:43:44.570882 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:43:44.570893 | orchestrator | 2025-05-14 14:43:44.570904 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:43:44.570914 | orchestrator | Wednesday 14 May 2025 14:40:15 +0000 (0:00:00.145) 0:00:03.068 ********* 2025-05-14 14:43:44.570925 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.570936 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.570946 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.570957 | orchestrator | 2025-05-14 14:43:44.570968 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:43:44.570978 | orchestrator | Wednesday 14 May 2025 14:40:15 +0000 (0:00:00.572) 0:00:03.641 ********* 2025-05-14 14:43:44.570989 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-14 14:43:44.571015 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-14 14:43:44.571026 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-14 14:43:44.571037 | orchestrator | 2025-05-14 14:43:44.571047 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-14 14:43:44.571058 | orchestrator | 2025-05-14 14:43:44.571069 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-14 14:43:44.571079 | orchestrator | Wednesday 14 May 2025 14:40:16 +0000 (0:00:00.396) 0:00:04.038 ********* 2025-05-14 14:43:44.571090 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:43:44.571101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 14:43:44.571111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 14:43:44.571122 | orchestrator | 2025-05-14 14:43:44.571135 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 14:43:44.571147 | orchestrator | Wednesday 14 May 2025 14:40:16 +0000 (0:00:00.641) 0:00:04.680 ********* 2025-05-14 14:43:44.571159 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:44.571172 | orchestrator | 2025-05-14 14:43:44.571184 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-14 14:43:44.571210 | orchestrator | Wednesday 14 May 2025 14:40:17 +0000 (0:00:00.610) 0:00:05.290 ********* 2025-05-14 
14:43:44.571246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571363 | orchestrator | 2025-05-14 14:43:44.571374 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-14 14:43:44.571386 | orchestrator | Wednesday 14 May 2025 14:40:21 +0000 (0:00:04.329) 0:00:09.619 ********* 2025-05-14 14:43:44.571397 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.571408 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.571419 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.571430 | orchestrator | 2025-05-14 14:43:44.571447 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-14 14:43:44.571472 | orchestrator | Wednesday 14 May 2025 14:40:22 +0000 (0:00:00.869) 0:00:10.489 ********* 2025-05-14 14:43:44.571495 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.571507 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.571517 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.571528 | orchestrator | 2025-05-14 14:43:44.571538 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-14 14:43:44.571549 | orchestrator | Wednesday 14 May 2025 14:40:24 +0000 (0:00:01.428) 0:00:11.917 ********* 2025-05-14 14:43:44.571571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571736 | orchestrator | 2025-05-14 14:43:44.571747 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-14 14:43:44.571759 | orchestrator | Wednesday 14 May 2025 14:40:29 +0000 (0:00:05.170) 0:00:17.088 ********* 2025-05-14 14:43:44.571769 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.571780 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.571791 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.571802 | orchestrator | 2025-05-14 14:43:44.571812 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-14 14:43:44.571823 | orchestrator | Wednesday 14 May 2025 14:40:30 +0000 (0:00:01.058) 0:00:18.146 ********* 2025-05-14 14:43:44.571834 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.571844 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:44.571855 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:44.571865 | orchestrator | 2025-05-14 14:43:44.571875 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-14 14:43:44.571886 | orchestrator | Wednesday 14 May 2025 14:40:38 +0000 (0:00:08.334) 0:00:26.481 
********* 2025-05-14 14:43:44.571907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 
2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 14:43:44.571966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.571996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 14:43:44.572020 | orchestrator | 2025-05-14 14:43:44.572031 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-14 14:43:44.572042 | orchestrator | Wednesday 14 May 2025 14:40:43 +0000 (0:00:04.931) 0:00:31.412 ********* 2025-05-14 14:43:44.572053 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.572064 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:44.572074 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:44.572085 | orchestrator | 2025-05-14 14:43:44.572096 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-14 14:43:44.572106 | orchestrator | Wednesday 14 May 2025 14:40:44 +0000 (0:00:01.223) 0:00:32.636 ********* 2025-05-14 14:43:44.572117 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.572128 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.572138 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.572149 | orchestrator | 2025-05-14 14:43:44.572160 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-14 14:43:44.572170 | orchestrator | Wednesday 14 May 2025 14:40:45 +0000 (0:00:00.530) 0:00:33.166 ********* 2025-05-14 14:43:44.572181 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.572192 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.572202 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.572212 | orchestrator | 2025-05-14 14:43:44.572223 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-14 14:43:44.572234 | orchestrator | Wednesday 14 May 2025 14:40:45 +0000 (0:00:00.339) 0:00:33.505 ********* 2025-05-14 14:43:44.572246 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-14 14:43:44.572257 | orchestrator | ...ignoring 2025-05-14 14:43:44.572268 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-14 14:43:44.572279 | orchestrator | ...ignoring 2025-05-14 14:43:44.572289 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-14 14:43:44.572301 | orchestrator | ...ignoring 2025-05-14 14:43:44.572311 | orchestrator | 2025-05-14 14:43:44.572322 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-14 14:43:44.572333 | orchestrator | Wednesday 14 May 2025 14:40:56 +0000 (0:00:11.044) 0:00:44.550 ********* 2025-05-14 14:43:44.572344 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.572355 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.572366 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.572376 | orchestrator | 2025-05-14 14:43:44.572387 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-14 14:43:44.572397 | orchestrator | Wednesday 14 May 2025 14:40:57 +0000 (0:00:00.585) 0:00:45.136 ********* 2025-05-14 14:43:44.572408 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.572419 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.572430 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.572441 | orchestrator | 2025-05-14 14:43:44.572452 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-14 14:43:44.572462 | orchestrator | Wednesday 14 May 2025 14:40:57 +0000 (0:00:00.529) 0:00:45.665 ********* 2025-05-14 14:43:44.572473 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.572490 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.572500 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.572511 | orchestrator | 2025-05-14 14:43:44.572528 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-14 14:43:44.572539 | orchestrator | Wednesday 14 May 2025 14:40:58 +0000 (0:00:00.400) 0:00:46.065 ********* 2025-05-14 14:43:44.572550 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.572561 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.572572 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.572582 | orchestrator | 2025-05-14 14:43:44.572593 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-14 14:43:44.572603 | orchestrator | Wednesday 14 May 2025 14:40:58 +0000 (0:00:00.579) 0:00:46.645 ********* 2025-05-14 14:43:44.572614 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.572624 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.572635 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.572682 | orchestrator | 2025-05-14 14:43:44.572693 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-14 14:43:44.572704 | orchestrator | Wednesday 14 May 2025 14:40:59 +0000 (0:00:00.595) 0:00:47.240 ********* 2025-05-14 14:43:44.572715 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.572726 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.572736 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.572747 | orchestrator | 2025-05-14 14:43:44.572757 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 14:43:44.572768 | orchestrator | Wednesday 14 May 2025 14:40:59 +0000 (0:00:00.526) 0:00:47.767 ********* 2025-05-14 14:43:44.572779 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.572789 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 14:43:44.572800 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-14 14:43:44.572810 | orchestrator | 2025-05-14 14:43:44.572821 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-14 14:43:44.572832 | orchestrator | Wednesday 14 May 2025 14:41:00 +0000 (0:00:00.495) 0:00:48.262 ********* 2025-05-14 14:43:44.572842 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.572853 | orchestrator | 2025-05-14 14:43:44.572864 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-14 14:43:44.572874 | orchestrator | Wednesday 14 May 2025 14:41:11 +0000 (0:00:10.896) 0:00:59.159 ********* 2025-05-14 14:43:44.572885 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.572896 | orchestrator | 2025-05-14 14:43:44.572906 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 14:43:44.572917 | orchestrator | Wednesday 14 May 2025 14:41:11 +0000 (0:00:00.124) 0:00:59.283 ********* 2025-05-14 14:43:44.572928 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.572944 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.572954 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.572965 | orchestrator | 2025-05-14 14:43:44.572976 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-14 14:43:44.572987 | orchestrator | Wednesday 14 May 2025 14:41:12 +0000 (0:00:01.341) 0:01:00.625 ********* 2025-05-14 14:43:44.572997 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.573008 | orchestrator | 2025-05-14 14:43:44.573018 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-14 14:43:44.573029 | orchestrator | Wednesday 14 May 2025 14:41:22 +0000 (0:00:10.039) 0:01:10.665 ********* 2025-05-14 14:43:44.573040 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
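
The earlier "Check MariaDB service" failure ("Timeout when waiting for search string MariaDB in 192.168.16.9:3306"), the per-node "Check MariaDB service port liveness" timeouts, and the "Wait for first MariaDB service port liveness" retry above are all the same kind of probe: open a TCP connection to port 3306 and look for the string "MariaDB" in the handshake the server sends unprompted. A minimal Python sketch of that probe, assuming a 10-second timeout (the failed checks above report elapsed: 10); host and port are taken from the log:

import socket

# Minimal sketch of the port-liveness probe: connect to the node and look for
# "MariaDB" in the raw handshake bytes (the server version string is embedded
# in the initial packet). Timeout value is an assumption mirroring the log.
def mariadb_alive(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            greeting = sock.recv(1024)      # MariaDB sends its handshake without a request
            return b"MariaDB" in greeting
    except OSError:
        return False

print(mariadb_alive("192.168.16.10"))
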
2025-05-14 14:43:44.573051 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.573061 | orchestrator | 2025-05-14 14:43:44.573072 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-14 14:43:44.573083 | orchestrator | Wednesday 14 May 2025 14:41:30 +0000 (0:00:07.260) 0:01:17.925 ********* 2025-05-14 14:43:44.573094 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.573111 | orchestrator | 2025-05-14 14:43:44.573122 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-14 14:43:44.573133 | orchestrator | Wednesday 14 May 2025 14:41:32 +0000 (0:00:02.662) 0:01:20.588 ********* 2025-05-14 14:43:44.573144 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.573154 | orchestrator | 2025-05-14 14:43:44.573165 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-14 14:43:44.573176 | orchestrator | Wednesday 14 May 2025 14:41:32 +0000 (0:00:00.123) 0:01:20.711 ********* 2025-05-14 14:43:44.573187 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.573197 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.573208 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.573218 | orchestrator | 2025-05-14 14:43:44.573230 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-14 14:43:44.573249 | orchestrator | Wednesday 14 May 2025 14:41:33 +0000 (0:00:00.493) 0:01:21.205 ********* 2025-05-14 14:43:44.573269 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.573292 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:44.573320 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:44.573337 | orchestrator | 2025-05-14 14:43:44.573356 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-14 14:43:44.573374 | orchestrator | Wednesday 14 May 2025 14:41:33 +0000 (0:00:00.452) 0:01:21.657 ********* 2025-05-14 14:43:44.573394 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-14 14:43:44.573412 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.573432 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:44.573456 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:44.573482 | orchestrator | 2025-05-14 14:43:44.573500 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-14 14:43:44.573517 | orchestrator | skipping: no hosts matched 2025-05-14 14:43:44.573535 | orchestrator | 2025-05-14 14:43:44.573552 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 14:43:44.573568 | orchestrator | 2025-05-14 14:43:44.573587 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 14:43:44.573604 | orchestrator | Wednesday 14 May 2025 14:41:49 +0000 (0:00:16.071) 0:01:37.729 ********* 2025-05-14 14:43:44.573621 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:43:44.573665 | orchestrator | 2025-05-14 14:43:44.573696 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 14:43:44.573715 | orchestrator | Wednesday 14 May 2025 14:42:06 +0000 (0:00:16.418) 0:01:54.148 ********* 2025-05-14 14:43:44.573732 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.573749 | orchestrator | 
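The "sync WSREP" handlers above and the per-node waits that follow hold each member until Galera reports it as synced before the next one is restarted; that rolling order is what keeps the cluster quorate through the serialized "Start mariadb services" and "Restart bootstrap mariadb service" plays. A rough sketch of such a wait, assuming it polls wsrep_local_state_comment through the client; module choice, credentials and retry values are assumptions rather than the actual role code:

- name: Wait for MariaDB service to sync WSREP
  ansible.builtin.command: >
    mysql -h {{ api_interface_address }} -P 3306
    -u monitor -p{{ mariadb_monitor_password }}
    --silent --skip-column-names
    -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_status
  changed_when: false
  until: "'Synced' in wsrep_status.stdout"
  retries: 10
  delay: 6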
2025-05-14 14:43:44.573766 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 14:43:44.573784 | orchestrator | Wednesday 14 May 2025 14:42:26 +0000 (0:00:20.560) 0:02:14.708 ********* 2025-05-14 14:43:44.573802 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.573819 | orchestrator | 2025-05-14 14:43:44.573835 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 14:43:44.573852 | orchestrator | 2025-05-14 14:43:44.573869 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 14:43:44.573887 | orchestrator | Wednesday 14 May 2025 14:42:29 +0000 (0:00:02.464) 0:02:17.172 ********* 2025-05-14 14:43:44.573906 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:43:44.573924 | orchestrator | 2025-05-14 14:43:44.573943 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 14:43:44.573957 | orchestrator | Wednesday 14 May 2025 14:42:45 +0000 (0:00:15.735) 0:02:32.908 ********* 2025-05-14 14:43:44.573967 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.573978 | orchestrator | 2025-05-14 14:43:44.573989 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 14:43:44.573999 | orchestrator | Wednesday 14 May 2025 14:43:05 +0000 (0:00:20.575) 0:02:53.484 ********* 2025-05-14 14:43:44.574083 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.574098 | orchestrator | 2025-05-14 14:43:44.574109 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-14 14:43:44.574119 | orchestrator | 2025-05-14 14:43:44.574129 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 14:43:44.574140 | orchestrator | Wednesday 14 May 2025 14:43:08 +0000 (0:00:02.552) 0:02:56.036 ********* 2025-05-14 14:43:44.574151 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.574161 | orchestrator | 2025-05-14 14:43:44.574172 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 14:43:44.574183 | orchestrator | Wednesday 14 May 2025 14:43:21 +0000 (0:00:13.307) 0:03:09.344 ********* 2025-05-14 14:43:44.574193 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.574204 | orchestrator | 2025-05-14 14:43:44.574214 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 14:43:44.574225 | orchestrator | Wednesday 14 May 2025 14:43:26 +0000 (0:00:04.561) 0:03:13.905 ********* 2025-05-14 14:43:44.574236 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.574246 | orchestrator | 2025-05-14 14:43:44.574264 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-14 14:43:44.574275 | orchestrator | 2025-05-14 14:43:44.574286 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-14 14:43:44.574297 | orchestrator | Wednesday 14 May 2025 14:43:28 +0000 (0:00:02.576) 0:03:16.482 ********* 2025-05-14 14:43:44.574307 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:43:44.574318 | orchestrator | 2025-05-14 14:43:44.574329 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-14 14:43:44.574339 | orchestrator | Wednesday 14 
May 2025 14:43:29 +0000 (0:00:00.701) 0:03:17.183 ********* 2025-05-14 14:43:44.574350 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.574360 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.574371 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.574381 | orchestrator | 2025-05-14 14:43:44.574392 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-14 14:43:44.574402 | orchestrator | Wednesday 14 May 2025 14:43:32 +0000 (0:00:02.646) 0:03:19.830 ********* 2025-05-14 14:43:44.574413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.574423 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.574434 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.574444 | orchestrator | 2025-05-14 14:43:44.574455 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-14 14:43:44.574466 | orchestrator | Wednesday 14 May 2025 14:43:34 +0000 (0:00:02.200) 0:03:22.030 ********* 2025-05-14 14:43:44.574476 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.574487 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.574497 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.574508 | orchestrator | 2025-05-14 14:43:44.574518 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-14 14:43:44.574529 | orchestrator | Wednesday 14 May 2025 14:43:36 +0000 (0:00:02.412) 0:03:24.443 ********* 2025-05-14 14:43:44.574540 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.574550 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.574561 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:43:44.574571 | orchestrator | 2025-05-14 14:43:44.574582 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-14 14:43:44.574592 | orchestrator | Wednesday 14 May 2025 14:43:38 +0000 (0:00:02.227) 0:03:26.670 ********* 2025-05-14 14:43:44.574603 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:43:44.574614 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:43:44.574625 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:43:44.574635 | orchestrator | 2025-05-14 14:43:44.574678 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-14 14:43:44.574697 | orchestrator | Wednesday 14 May 2025 14:43:42 +0000 (0:00:03.419) 0:03:30.090 ********* 2025-05-14 14:43:44.574726 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:43:44.574744 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:43:44.574760 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:43:44.574777 | orchestrator | 2025-05-14 14:43:44.574797 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:43:44.574815 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-14 14:43:44.574833 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-14 14:43:44.574861 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-14 14:43:44.574873 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-14 14:43:44.574884 | orchestrator | 2025-05-14 14:43:44.574894 | orchestrator | 2025-05-14 
14:43:44.574905 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:43:44.574916 | orchestrator | Wednesday 14 May 2025 14:43:42 +0000 (0:00:00.391) 0:03:30.482 ********* 2025-05-14 14:43:44.574926 | orchestrator | =============================================================================== 2025-05-14 14:43:44.574937 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.14s 2025-05-14 14:43:44.574948 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.15s 2025-05-14 14:43:44.574959 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 16.07s 2025-05-14 14:43:44.574969 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.31s 2025-05-14 14:43:44.574980 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.04s 2025-05-14 14:43:44.574991 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.90s 2025-05-14 14:43:44.575001 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.04s 2025-05-14 14:43:44.575012 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 8.33s 2025-05-14 14:43:44.575023 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.26s 2025-05-14 14:43:44.575033 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.17s 2025-05-14 14:43:44.575044 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.02s 2025-05-14 14:43:44.575055 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.93s 2025-05-14 14:43:44.575066 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.56s 2025-05-14 14:43:44.575076 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.33s 2025-05-14 14:43:44.575093 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.42s 2025-05-14 14:43:44.575104 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.66s 2025-05-14 14:43:44.575115 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.65s 2025-05-14 14:43:44.575126 | orchestrator | Check MariaDB service --------------------------------------------------- 2.62s 2025-05-14 14:43:44.575136 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.58s 2025-05-14 14:43:44.575147 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.41s 2025-05-14 14:43:44.575158 | orchestrator | 2025-05-14 14:43:44 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:43:44.575169 | orchestrator | 2025-05-14 14:43:44 | INFO  | Task 12f9a1d4-5791-4b71-91b5-a91a79adcb2c is in state SUCCESS 2025-05-14 14:43:44.575180 | orchestrator | 2025-05-14 14:43:44 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:44.575198 | orchestrator | 2025-05-14 14:43:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:47.609707 | orchestrator | 2025-05-14 14:43:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:47.610339 | orchestrator | 2025-05-14 14:43:47 
| INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:43:47.612747 | orchestrator | 2025-05-14 14:43:47 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:43:47.613920 | orchestrator | 2025-05-14 14:43:47 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:47.613988 | orchestrator | 2025-05-14 14:43:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:50.662576 | orchestrator | 2025-05-14 14:43:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:50.664559 | orchestrator | 2025-05-14 14:43:50 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:43:50.666318 | orchestrator | 2025-05-14 14:43:50 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:43:50.667950 | orchestrator | 2025-05-14 14:43:50 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:50.668071 | orchestrator | 2025-05-14 14:43:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:53.717642 | orchestrator | 2025-05-14 14:43:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:53.718345 | orchestrator | 2025-05-14 14:43:53 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:43:53.719355 | orchestrator | 2025-05-14 14:43:53 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:43:53.723350 | orchestrator | 2025-05-14 14:43:53 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:53.723398 | orchestrator | 2025-05-14 14:43:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:56.771826 | orchestrator | 2025-05-14 14:43:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:56.774896 | orchestrator | 2025-05-14 14:43:56 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:43:56.774938 | orchestrator | 2025-05-14 14:43:56 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:43:56.774951 | orchestrator | 2025-05-14 14:43:56 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:56.774962 | orchestrator | 2025-05-14 14:43:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:43:59.815035 | orchestrator | 2025-05-14 14:43:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:43:59.816740 | orchestrator | 2025-05-14 14:43:59 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:43:59.817395 | orchestrator | 2025-05-14 14:43:59 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:43:59.818628 | orchestrator | 2025-05-14 14:43:59 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:43:59.818655 | orchestrator | 2025-05-14 14:43:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:02.864465 | orchestrator | 2025-05-14 14:44:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:02.864569 | orchestrator | 2025-05-14 14:44:02 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:02.865310 | orchestrator | 2025-05-14 14:44:02 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:02.865887 | orchestrator | 2025-05-14 14:44:02 | 
INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:02.865911 | orchestrator | 2025-05-14 14:44:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:05.916893 | orchestrator | 2025-05-14 14:44:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:05.917044 | orchestrator | 2025-05-14 14:44:05 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:05.918441 | orchestrator | 2025-05-14 14:44:05 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:05.919554 | orchestrator | 2025-05-14 14:44:05 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:05.919584 | orchestrator | 2025-05-14 14:44:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:08.975332 | orchestrator | 2025-05-14 14:44:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:08.975467 | orchestrator | 2025-05-14 14:44:08 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:08.976122 | orchestrator | 2025-05-14 14:44:08 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:08.976989 | orchestrator | 2025-05-14 14:44:08 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:08.977027 | orchestrator | 2025-05-14 14:44:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:12.036254 | orchestrator | 2025-05-14 14:44:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:12.036522 | orchestrator | 2025-05-14 14:44:12 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:12.038349 | orchestrator | 2025-05-14 14:44:12 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:12.040436 | orchestrator | 2025-05-14 14:44:12 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:12.040460 | orchestrator | 2025-05-14 14:44:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:15.085415 | orchestrator | 2025-05-14 14:44:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:15.091297 | orchestrator | 2025-05-14 14:44:15 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:15.094809 | orchestrator | 2025-05-14 14:44:15 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:15.100540 | orchestrator | 2025-05-14 14:44:15 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:15.100619 | orchestrator | 2025-05-14 14:44:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:18.153938 | orchestrator | 2025-05-14 14:44:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:18.155222 | orchestrator | 2025-05-14 14:44:18 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:18.157492 | orchestrator | 2025-05-14 14:44:18 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:18.160058 | orchestrator | 2025-05-14 14:44:18 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:18.160090 | orchestrator | 2025-05-14 14:44:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:21.202448 | orchestrator | 2025-05-14 14:44:21 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:21.202634 | orchestrator | 2025-05-14 14:44:21 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:21.203210 | orchestrator | 2025-05-14 14:44:21 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:21.204036 | orchestrator | 2025-05-14 14:44:21 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:21.204057 | orchestrator | 2025-05-14 14:44:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:24.240017 | orchestrator | 2025-05-14 14:44:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:24.240147 | orchestrator | 2025-05-14 14:44:24 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:24.240515 | orchestrator | 2025-05-14 14:44:24 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:24.243803 | orchestrator | 2025-05-14 14:44:24 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:24.245874 | orchestrator | 2025-05-14 14:44:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:27.285235 | orchestrator | 2025-05-14 14:44:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:27.286246 | orchestrator | 2025-05-14 14:44:27 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:27.287818 | orchestrator | 2025-05-14 14:44:27 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:27.289120 | orchestrator | 2025-05-14 14:44:27 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:27.289164 | orchestrator | 2025-05-14 14:44:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:30.335237 | orchestrator | 2025-05-14 14:44:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:30.337329 | orchestrator | 2025-05-14 14:44:30 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:30.339117 | orchestrator | 2025-05-14 14:44:30 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:30.341119 | orchestrator | 2025-05-14 14:44:30 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:30.341390 | orchestrator | 2025-05-14 14:44:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:33.389443 | orchestrator | 2025-05-14 14:44:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:33.390517 | orchestrator | 2025-05-14 14:44:33 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:33.392420 | orchestrator | 2025-05-14 14:44:33 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:33.393530 | orchestrator | 2025-05-14 14:44:33 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:33.393710 | orchestrator | 2025-05-14 14:44:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:36.447078 | orchestrator | 2025-05-14 14:44:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:36.449232 | orchestrator | 2025-05-14 14:44:36 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:36.452387 | orchestrator | 2025-05-14 14:44:36 | INFO  | Task 
2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:36.454225 | orchestrator | 2025-05-14 14:44:36 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:36.454879 | orchestrator | 2025-05-14 14:44:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:39.501335 | orchestrator | 2025-05-14 14:44:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:39.501737 | orchestrator | 2025-05-14 14:44:39 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:39.502859 | orchestrator | 2025-05-14 14:44:39 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:39.504201 | orchestrator | 2025-05-14 14:44:39 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:39.504227 | orchestrator | 2025-05-14 14:44:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:42.551921 | orchestrator | 2025-05-14 14:44:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:42.553166 | orchestrator | 2025-05-14 14:44:42 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:42.554957 | orchestrator | 2025-05-14 14:44:42 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:42.556376 | orchestrator | 2025-05-14 14:44:42 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:42.556402 | orchestrator | 2025-05-14 14:44:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:45.612370 | orchestrator | 2025-05-14 14:44:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:45.617497 | orchestrator | 2025-05-14 14:44:45 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:45.617545 | orchestrator | 2025-05-14 14:44:45 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:45.617576 | orchestrator | 2025-05-14 14:44:45 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:45.617588 | orchestrator | 2025-05-14 14:44:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:48.671272 | orchestrator | 2025-05-14 14:44:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:48.671622 | orchestrator | 2025-05-14 14:44:48 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:48.672918 | orchestrator | 2025-05-14 14:44:48 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:48.674217 | orchestrator | 2025-05-14 14:44:48 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:48.674257 | orchestrator | 2025-05-14 14:44:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:51.730805 | orchestrator | 2025-05-14 14:44:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:51.730900 | orchestrator | 2025-05-14 14:44:51 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:51.730908 | orchestrator | 2025-05-14 14:44:51 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:51.730913 | orchestrator | 2025-05-14 14:44:51 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:51.730919 | orchestrator | 2025-05-14 14:44:51 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 14:44:54.787282 | orchestrator | 2025-05-14 14:44:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:54.788434 | orchestrator | 2025-05-14 14:44:54 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:54.790446 | orchestrator | 2025-05-14 14:44:54 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:54.791710 | orchestrator | 2025-05-14 14:44:54 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:54.791735 | orchestrator | 2025-05-14 14:44:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:44:57.844645 | orchestrator | 2025-05-14 14:44:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:44:57.846075 | orchestrator | 2025-05-14 14:44:57 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:44:57.848397 | orchestrator | 2025-05-14 14:44:57 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:44:57.850007 | orchestrator | 2025-05-14 14:44:57 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:44:57.850140 | orchestrator | 2025-05-14 14:44:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:00.902180 | orchestrator | 2025-05-14 14:45:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:00.905083 | orchestrator | 2025-05-14 14:45:00 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:00.906135 | orchestrator | 2025-05-14 14:45:00 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:00.907783 | orchestrator | 2025-05-14 14:45:00 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:00.910775 | orchestrator | 2025-05-14 14:45:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:03.968032 | orchestrator | 2025-05-14 14:45:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:03.968249 | orchestrator | 2025-05-14 14:45:03 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:03.968282 | orchestrator | 2025-05-14 14:45:03 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:03.969376 | orchestrator | 2025-05-14 14:45:03 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:03.969403 | orchestrator | 2025-05-14 14:45:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:07.034676 | orchestrator | 2025-05-14 14:45:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:07.036004 | orchestrator | 2025-05-14 14:45:07 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:07.039282 | orchestrator | 2025-05-14 14:45:07 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:07.039334 | orchestrator | 2025-05-14 14:45:07 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:07.039370 | orchestrator | 2025-05-14 14:45:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:10.088132 | orchestrator | 2025-05-14 14:45:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:10.090227 | orchestrator | 2025-05-14 14:45:10 | INFO  | Task 
882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:10.091426 | orchestrator | 2025-05-14 14:45:10 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:10.093277 | orchestrator | 2025-05-14 14:45:10 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:10.093304 | orchestrator | 2025-05-14 14:45:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:13.162568 | orchestrator | 2025-05-14 14:45:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:13.164440 | orchestrator | 2025-05-14 14:45:13 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:13.166075 | orchestrator | 2025-05-14 14:45:13 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:13.167319 | orchestrator | 2025-05-14 14:45:13 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:13.167345 | orchestrator | 2025-05-14 14:45:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:16.225788 | orchestrator | 2025-05-14 14:45:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:16.227929 | orchestrator | 2025-05-14 14:45:16 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:16.230460 | orchestrator | 2025-05-14 14:45:16 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:16.231807 | orchestrator | 2025-05-14 14:45:16 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:16.231909 | orchestrator | 2025-05-14 14:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:19.291170 | orchestrator | 2025-05-14 14:45:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:19.293369 | orchestrator | 2025-05-14 14:45:19 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:19.295902 | orchestrator | 2025-05-14 14:45:19 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:19.297194 | orchestrator | 2025-05-14 14:45:19 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:19.297243 | orchestrator | 2025-05-14 14:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:22.364118 | orchestrator | 2025-05-14 14:45:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:22.365606 | orchestrator | 2025-05-14 14:45:22 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:22.368764 | orchestrator | 2025-05-14 14:45:22 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:22.370655 | orchestrator | 2025-05-14 14:45:22 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:22.370878 | orchestrator | 2025-05-14 14:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:25.428770 | orchestrator | 2025-05-14 14:45:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:25.431114 | orchestrator | 2025-05-14 14:45:25 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:25.434162 | orchestrator | 2025-05-14 14:45:25 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:25.434439 | orchestrator | 2025-05-14 14:45:25 | INFO  | Task 
08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:25.434459 | orchestrator | 2025-05-14 14:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:28.484200 | orchestrator | 2025-05-14 14:45:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:28.484304 | orchestrator | 2025-05-14 14:45:28 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state STARTED 2025-05-14 14:45:28.484319 | orchestrator | 2025-05-14 14:45:28 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:28.484623 | orchestrator | 2025-05-14 14:45:28 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:28.484691 | orchestrator | 2025-05-14 14:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:31.525224 | orchestrator | 2025-05-14 14:45:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:31.526983 | orchestrator | 2025-05-14 14:45:31 | INFO  | Task 882f04dc-0cf2-4009-8f5d-8dd606c23a71 is in state SUCCESS 2025-05-14 14:45:31.528631 | orchestrator | 2025-05-14 14:45:31.528668 | orchestrator | 2025-05-14 14:45:31.528722 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:45:31.528735 | orchestrator | 2025-05-14 14:45:31.528816 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:45:31.528830 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.310) 0:00:00.310 ********* 2025-05-14 14:45:31.528868 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.528882 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.528892 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.528903 | orchestrator | 2025-05-14 14:45:31.528914 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:45:31.528925 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.417) 0:00:00.727 ********* 2025-05-14 14:45:31.528971 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-14 14:45:31.528984 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-14 14:45:31.528995 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-14 14:45:31.529006 | orchestrator | 2025-05-14 14:45:31.529017 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-14 14:45:31.529028 | orchestrator | 2025-05-14 14:45:31.529039 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 14:45:31.529050 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.342) 0:00:01.069 ********* 2025-05-14 14:45:31.529061 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:45:31.529073 | orchestrator | 2025-05-14 14:45:31.529084 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-14 14:45:31.529095 | orchestrator | Wednesday 14 May 2025 14:43:47 +0000 (0:00:00.778) 0:00:01.848 ********* 2025-05-14 14:45:31.529113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.529186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.529202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.529223 | orchestrator | 2025-05-14 14:45:31.529235 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-14 14:45:31.529247 | orchestrator | Wednesday 14 May 2025 14:43:49 +0000 (0:00:01.894) 0:00:03.742 ********* 2025-05-14 14:45:31.529260 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.529272 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.529289 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.529301 | orchestrator | 2025-05-14 14:45:31.529313 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 14:45:31.529325 | orchestrator | Wednesday 14 May 2025 14:43:49 +0000 (0:00:00.291) 0:00:04.033 ********* 
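The include_tasks results that follow show the usual enabled-flag loop: services flagged enabled: False are skipped, every other entry pulls in policy_item.yml once. A minimal sketch of that pattern; the list variable name is illustrative:

- name: Include per-service custom policy tasks
  ansible.builtin.include_tasks: policy_item.yml
  loop: "{{ horizon_policy_services }}"  # list of {name, enabled} entries as printed in the log
  when: item.enabled | bool

The bool filter is what lets both True and the string value 'yes' (ceilometer, cinder) count as enabled.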
2025-05-14 14:45:31.529345 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 14:45:31.529358 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 14:45:31.529370 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 14:45:31.529383 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 14:45:31.529402 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 14:45:31.529422 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-14 14:45:31.529442 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 14:45:31.529460 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 14:45:31.529476 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 14:45:31.529488 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 14:45:31.529498 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 14:45:31.529509 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 14:45:31.529520 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-14 14:45:31.529531 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 14:45:31.529541 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 14:45:31.529552 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 14:45:31.529563 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 14:45:31.529574 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 14:45:31.529584 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 14:45:31.529595 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-14 14:45:31.529606 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 14:45:31.529617 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-14 14:45:31.529640 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-14 14:45:31.529651 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-14 14:45:31.529662 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-14 14:45:31.529673 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-14 14:45:31.529685 | orchestrator | 
included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-14 14:45:31.529696 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-14 14:45:31.529707 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-14 14:45:31.529718 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-14 14:45:31.529729 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-14 14:45:31.529740 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-14 14:45:31.529750 | orchestrator | 2025-05-14 14:45:31.529761 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.529773 | orchestrator | Wednesday 14 May 2025 14:43:50 +0000 (0:00:01.106) 0:00:05.140 ********* 2025-05-14 14:45:31.529784 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.529795 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.529805 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.529816 | orchestrator | 2025-05-14 14:45:31.529833 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.529915 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.490) 0:00:05.630 ********* 2025-05-14 14:45:31.529927 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.529939 | orchestrator | 2025-05-14 14:45:31.529959 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.529970 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.141) 0:00:05.772 ********* 2025-05-14 14:45:31.529981 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.529992 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.530003 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.530061 | orchestrator | 2025-05-14 14:45:31.530076 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.530088 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.301) 0:00:06.073 ********* 2025-05-14 14:45:31.530098 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.530109 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.530120 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.530131 | orchestrator | 2025-05-14 14:45:31.530142 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.530152 | orchestrator | Wednesday 14 May 2025 14:43:52 +0000 (0:00:00.559) 0:00:06.632 ********* 2025-05-14 14:45:31.530163 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530174 | orchestrator | 2025-05-14 14:45:31.530185 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.530195 | orchestrator | Wednesday 14 May 2025 14:43:52 
+0000 (0:00:00.144) 0:00:06.777 ********* 2025-05-14 14:45:31.530215 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530226 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.530236 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.530247 | orchestrator | 2025-05-14 14:45:31.530257 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.530268 | orchestrator | Wednesday 14 May 2025 14:43:53 +0000 (0:00:00.454) 0:00:07.231 ********* 2025-05-14 14:45:31.530279 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.530290 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.530301 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.530312 | orchestrator | 2025-05-14 14:45:31.530323 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.530333 | orchestrator | Wednesday 14 May 2025 14:43:53 +0000 (0:00:00.496) 0:00:07.728 ********* 2025-05-14 14:45:31.530344 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530354 | orchestrator | 2025-05-14 14:45:31.530365 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.530376 | orchestrator | Wednesday 14 May 2025 14:43:53 +0000 (0:00:00.139) 0:00:07.868 ********* 2025-05-14 14:45:31.530387 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530398 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.530408 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.530419 | orchestrator | 2025-05-14 14:45:31.530468 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.530481 | orchestrator | Wednesday 14 May 2025 14:43:54 +0000 (0:00:00.418) 0:00:08.286 ********* 2025-05-14 14:45:31.530492 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.530504 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.530515 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.530525 | orchestrator | 2025-05-14 14:45:31.530536 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.530547 | orchestrator | Wednesday 14 May 2025 14:43:54 +0000 (0:00:00.461) 0:00:08.747 ********* 2025-05-14 14:45:31.530558 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530569 | orchestrator | 2025-05-14 14:45:31.530580 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.530590 | orchestrator | Wednesday 14 May 2025 14:43:54 +0000 (0:00:00.161) 0:00:08.909 ********* 2025-05-14 14:45:31.530601 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530612 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.530622 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.530633 | orchestrator | 2025-05-14 14:45:31.530643 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.530654 | orchestrator | Wednesday 14 May 2025 14:43:55 +0000 (0:00:00.560) 0:00:09.470 ********* 2025-05-14 14:45:31.530665 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.530675 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.530686 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.530696 | orchestrator | 2025-05-14 14:45:31.530707 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2025-05-14 14:45:31.530718 | orchestrator | Wednesday 14 May 2025 14:43:55 +0000 (0:00:00.364) 0:00:09.834 ********* 2025-05-14 14:45:31.530728 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530739 | orchestrator | 2025-05-14 14:45:31.530750 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.530760 | orchestrator | Wednesday 14 May 2025 14:43:55 +0000 (0:00:00.281) 0:00:10.116 ********* 2025-05-14 14:45:31.530771 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530782 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.530792 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.530803 | orchestrator | 2025-05-14 14:45:31.530814 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.530824 | orchestrator | Wednesday 14 May 2025 14:43:56 +0000 (0:00:00.459) 0:00:10.575 ********* 2025-05-14 14:45:31.530871 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.530884 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.530895 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.530906 | orchestrator | 2025-05-14 14:45:31.530916 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.530927 | orchestrator | Wednesday 14 May 2025 14:43:57 +0000 (0:00:00.855) 0:00:11.431 ********* 2025-05-14 14:45:31.530938 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530948 | orchestrator | 2025-05-14 14:45:31.530959 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.530970 | orchestrator | Wednesday 14 May 2025 14:43:57 +0000 (0:00:00.108) 0:00:11.539 ********* 2025-05-14 14:45:31.530981 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.530998 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.531009 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.531019 | orchestrator | 2025-05-14 14:45:31.531030 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.531041 | orchestrator | Wednesday 14 May 2025 14:43:58 +0000 (0:00:00.731) 0:00:12.270 ********* 2025-05-14 14:45:31.531060 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.531071 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.531082 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.531092 | orchestrator | 2025-05-14 14:45:31.531103 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.531113 | orchestrator | Wednesday 14 May 2025 14:43:58 +0000 (0:00:00.508) 0:00:12.778 ********* 2025-05-14 14:45:31.531124 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531134 | orchestrator | 2025-05-14 14:45:31.531145 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.531156 | orchestrator | Wednesday 14 May 2025 14:43:58 +0000 (0:00:00.114) 0:00:12.893 ********* 2025-05-14 14:45:31.531167 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531177 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.531188 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.531198 | orchestrator | 2025-05-14 14:45:31.531209 | orchestrator | TASK [horizon : Update policy file name] *************************************** 
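Each policy_item.yml pass in this section repeats the same three steps per service: "Update policy file name", "Check if policies shall be overwritten" (run once, which is why only testbed-node-0 appears), and "Update custom policy file name" when a custom file is found. A hypothetical reconstruction based only on the task names in this output; paths, variable names and module choices are assumptions:

- name: Update policy file name
  ansible.builtin.set_fact:
    service_policy_file: "{{ item.name }}_policy.yaml"

- name: Check if policies shall be overwritten
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/horizon/{{ service_policy_file }}"
  delegate_to: localhost
  run_once: true
  register: service_policy_stat
  when: horizon_check_custom_policies | default(false)  # hypothetical gate, false (skipped) in this run

- name: Update custom policy file name
  ansible.builtin.set_fact:
    # custom_policy starts empty ("Set empty custom policy" above)
    custom_policy: "{{ custom_policy + [service_policy_file] }}"
  when:
    - service_policy_stat is not skipped
    - service_policy_stat.stat.exists

In this run both the overwrite check and the custom file name update are skipped for every service, so Horizon ends up with the default policy files shipped in the image.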
2025-05-14 14:45:31.531220 | orchestrator | Wednesday 14 May 2025 14:43:59 +0000 (0:00:00.633) 0:00:13.527 ********* 2025-05-14 14:45:31.531231 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.531241 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.531252 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.531262 | orchestrator | 2025-05-14 14:45:31.531273 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.531284 | orchestrator | Wednesday 14 May 2025 14:43:59 +0000 (0:00:00.420) 0:00:13.947 ********* 2025-05-14 14:45:31.531294 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531305 | orchestrator | 2025-05-14 14:45:31.531315 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.531326 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.247) 0:00:14.195 ********* 2025-05-14 14:45:31.531337 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531369 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.531380 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.531391 | orchestrator | 2025-05-14 14:45:31.531402 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.531412 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.293) 0:00:14.489 ********* 2025-05-14 14:45:31.531435 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.531447 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.531457 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.531468 | orchestrator | 2025-05-14 14:45:31.531479 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.531490 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.484) 0:00:14.973 ********* 2025-05-14 14:45:31.531501 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531520 | orchestrator | 2025-05-14 14:45:31.531531 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.531542 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.116) 0:00:15.089 ********* 2025-05-14 14:45:31.531553 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531564 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.531575 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.531585 | orchestrator | 2025-05-14 14:45:31.531596 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.531607 | orchestrator | Wednesday 14 May 2025 14:44:01 +0000 (0:00:00.454) 0:00:15.544 ********* 2025-05-14 14:45:31.531618 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.531629 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.531640 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.531650 | orchestrator | 2025-05-14 14:45:31.531662 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.531673 | orchestrator | Wednesday 14 May 2025 14:44:01 +0000 (0:00:00.498) 0:00:16.043 ********* 2025-05-14 14:45:31.531683 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531694 | orchestrator | 2025-05-14 14:45:31.531705 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.531716 | orchestrator 
| Wednesday 14 May 2025 14:44:02 +0000 (0:00:00.120) 0:00:16.163 ********* 2025-05-14 14:45:31.531727 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531737 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.531748 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.531759 | orchestrator | 2025-05-14 14:45:31.531769 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 14:45:31.531780 | orchestrator | Wednesday 14 May 2025 14:44:02 +0000 (0:00:00.399) 0:00:16.563 ********* 2025-05-14 14:45:31.531791 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:45:31.531802 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:45:31.531812 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:45:31.531823 | orchestrator | 2025-05-14 14:45:31.531834 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 14:45:31.531868 | orchestrator | Wednesday 14 May 2025 14:44:03 +0000 (0:00:00.661) 0:00:17.224 ********* 2025-05-14 14:45:31.531880 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531890 | orchestrator | 2025-05-14 14:45:31.531901 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 14:45:31.531912 | orchestrator | Wednesday 14 May 2025 14:44:03 +0000 (0:00:00.209) 0:00:17.433 ********* 2025-05-14 14:45:31.531923 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.531934 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.531945 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.531955 | orchestrator | 2025-05-14 14:45:31.531966 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-14 14:45:31.531976 | orchestrator | Wednesday 14 May 2025 14:44:03 +0000 (0:00:00.692) 0:00:18.125 ********* 2025-05-14 14:45:31.531987 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:45:31.531999 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:45:31.532009 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:45:31.532020 | orchestrator | 2025-05-14 14:45:31.532030 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-14 14:45:31.532047 | orchestrator | Wednesday 14 May 2025 14:44:07 +0000 (0:00:03.144) 0:00:21.271 ********* 2025-05-14 14:45:31.532058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 14:45:31.532077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 14:45:31.532089 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 14:45:31.532099 | orchestrator | 2025-05-14 14:45:31.532110 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-14 14:45:31.532128 | orchestrator | Wednesday 14 May 2025 14:44:10 +0000 (0:00:03.494) 0:00:24.766 ********* 2025-05-14 14:45:31.532139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 14:45:31.532150 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 14:45:31.532161 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 14:45:31.532172 | orchestrator | 2025-05-14 
14:45:31.532183 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-14 14:45:31.532194 | orchestrator | Wednesday 14 May 2025 14:44:13 +0000 (0:00:02.933) 0:00:27.699 ********* 2025-05-14 14:45:31.532204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 14:45:31.532215 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 14:45:31.532226 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 14:45:31.532237 | orchestrator | 2025-05-14 14:45:31.532248 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-14 14:45:31.532259 | orchestrator | Wednesday 14 May 2025 14:44:15 +0000 (0:00:02.057) 0:00:29.756 ********* 2025-05-14 14:45:31.532269 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.532281 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.532291 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.532302 | orchestrator | 2025-05-14 14:45:31.532313 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-14 14:45:31.532323 | orchestrator | Wednesday 14 May 2025 14:44:16 +0000 (0:00:00.448) 0:00:30.205 ********* 2025-05-14 14:45:31.532334 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.532345 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.532356 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.532367 | orchestrator | 2025-05-14 14:45:31.532377 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 14:45:31.532388 | orchestrator | Wednesday 14 May 2025 14:44:16 +0000 (0:00:00.400) 0:00:30.605 ********* 2025-05-14 14:45:31.532399 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:45:31.532410 | orchestrator | 2025-05-14 14:45:31.532421 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-14 14:45:31.532432 | orchestrator | Wednesday 14 May 2025 14:44:17 +0000 (0:00:00.715) 0:00:31.321 ********* 2025-05-14 14:45:31.532453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.532476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.532545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.532567 | orchestrator | 2025-05-14 14:45:31.532579 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-14 14:45:31.532590 | orchestrator | Wednesday 14 May 2025 14:44:18 +0000 (0:00:01.731) 0:00:33.052 ********* 2025-05-14 14:45:31.532602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:45:31.532614 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.532642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:45:31.532663 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.532675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:45:31.532687 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.532705 | orchestrator | 2025-05-14 14:45:31.532716 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-14 14:45:31.532726 | orchestrator | Wednesday 14 May 2025 14:44:19 +0000 (0:00:01.022) 0:00:34.075 ********* 2025-05-14 14:45:31.532753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:45:31.532766 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.532777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:45:31.532797 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.532824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 14:45:31.532856 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.532870 | orchestrator | 2025-05-14 14:45:31.532881 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-14 14:45:31.532892 | orchestrator | Wednesday 14 May 2025 14:44:21 +0000 (0:00:01.311) 0:00:35.387 ********* 2025-05-14 14:45:31.532916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.532938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.532964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 14:45:31.532985 | orchestrator | 2025-05-14 14:45:31.532996 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 14:45:31.533007 | orchestrator | Wednesday 14 May 2025 14:44:25 +0000 (0:00:04.654) 0:00:40.041 ********* 2025-05-14 14:45:31.533018 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:45:31.533030 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:45:31.533040 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:45:31.533051 | orchestrator | 2025-05-14 14:45:31.533062 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 14:45:31.533073 | orchestrator | Wednesday 14 May 2025 14:44:26 +0000 (0:00:00.340) 0:00:40.382 ********* 2025-05-14 14:45:31.533084 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:45:31.533171 | orchestrator | 2025-05-14 14:45:31.533184 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-14 14:45:31.533195 | orchestrator | Wednesday 14 May 2025 14:44:26 +0000 (0:00:00.502) 0:00:40.884 ********* 2025-05-14 14:45:31.533205 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:45:31.533216 | orchestrator | 2025-05-14 14:45:31.533227 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-14 14:45:31.533238 | orchestrator | Wednesday 14 May 2025 14:44:29 +0000 (0:00:02.573) 0:00:43.458 ********* 2025-05-14 14:45:31.533249 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:45:31.533260 | orchestrator | 2025-05-14 14:45:31.533271 | orchestrator | TASK [horizon : 
Running Horizon bootstrap container] ***************************
2025-05-14 14:45:31.533282 | orchestrator | Wednesday 14 May 2025 14:44:31 +0000 (0:00:02.284) 0:00:45.742 *********
2025-05-14 14:45:31.533293 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:45:31.533303 | orchestrator |
2025-05-14 14:45:31.533314 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-14 14:45:31.533325 | orchestrator | Wednesday 14 May 2025 14:44:45 +0000 (0:00:13.961) 0:00:59.703 *********
2025-05-14 14:45:31.533336 | orchestrator |
2025-05-14 14:45:31.533346 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-14 14:45:31.533358 | orchestrator | Wednesday 14 May 2025 14:44:45 +0000 (0:00:00.060) 0:00:59.764 *********
2025-05-14 14:45:31.533369 | orchestrator |
2025-05-14 14:45:31.533380 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-14 14:45:31.533399 | orchestrator | Wednesday 14 May 2025 14:44:45 +0000 (0:00:00.218) 0:00:59.982 *********
2025-05-14 14:45:31.533410 | orchestrator |
2025-05-14 14:45:31.533420 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-14 14:45:31.533431 | orchestrator | Wednesday 14 May 2025 14:44:45 +0000 (0:00:00.069) 0:01:00.052 *********
2025-05-14 14:45:31.533443 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:45:31.533453 | orchestrator | changed: [testbed-node-2]
2025-05-14 14:45:31.533464 | orchestrator | changed: [testbed-node-1]
2025-05-14 14:45:31.533474 | orchestrator |
2025-05-14 14:45:31.533485 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 14:45:31.533496 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-14 14:45:31.533508 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-14 14:45:31.533519 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-14 14:45:31.533530 | orchestrator |
2025-05-14 14:45:31.533541 | orchestrator |
2025-05-14 14:45:31.533552 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 14:45:31.533563 | orchestrator | Wednesday 14 May 2025 14:45:29 +0000 (0:00:43.530) 0:01:43.582 *********
2025-05-14 14:45:31.533574 | orchestrator | ===============================================================================
2025-05-14 14:45:31.533585 | orchestrator | horizon : Restart horizon container ------------------------------------ 43.53s
2025-05-14 14:45:31.533596 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.96s
2025-05-14 14:45:31.533606 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.65s
2025-05-14 14:45:31.533617 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.49s
2025-05-14 14:45:31.533628 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.15s
2025-05-14 14:45:31.533639 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.93s
2025-05-14 14:45:31.533650 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.57s
2025-05-14 14:45:31.533660 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.28s
2025-05-14 14:45:31.533671 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.06s
2025-05-14 14:45:31.533682 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.89s
2025-05-14 14:45:31.533699 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.73s
2025-05-14 14:45:31.533710 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.31s
2025-05-14 14:45:31.533721 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.11s
2025-05-14 14:45:31.533739 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.02s
2025-05-14 14:45:31.533751 | orchestrator | horizon : Update policy file name --------------------------------------- 0.86s
2025-05-14 14:45:31.533762 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2025-05-14 14:45:31.533772 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.73s
2025-05-14 14:45:31.533783 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2025-05-14 14:45:31.533794 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.69s
2025-05-14 14:45:31.533805 | orchestrator | horizon : Update policy file name --------------------------------------- 0.66s
2025-05-14 14:45:31.533816 | orchestrator | 2025-05-14 14:45:31 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED
2025-05-14 14:45:31.533827 | orchestrator | 2025-05-14 14:45:31 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED
2025-05-14 14:45:31.533892 | orchestrator | 2025-05-14 14:45:31 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:45:34.582142 | orchestrator | 2025-05-14 14:45:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:45:34.584409 | orchestrator | 2025-05-14 14:45:34 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED
2025-05-14 14:45:34.587458 | orchestrator | 2025-05-14 14:45:34 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED
2025-05-14 14:45:34.587536 | orchestrator | 2025-05-14 14:45:34 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:45:37.635169 | orchestrator | 2025-05-14 14:45:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:45:37.637720 | orchestrator | 2025-05-14 14:45:37 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED
2025-05-14 14:45:37.639986 | orchestrator | 2025-05-14 14:45:37 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED
2025-05-14 14:45:37.640268 | orchestrator | 2025-05-14 14:45:37 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:45:40.689146 | orchestrator | 2025-05-14 14:45:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:45:40.691018 | orchestrator | 2025-05-14 14:45:40 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED
2025-05-14 14:45:40.693647 | orchestrator | 2025-05-14 14:45:40 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED
2025-05-14 14:45:40.693680 | orchestrator | 2025-05-14 14:45:40 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:45:43.745921 | orchestrator |
2025-05-14 14:45:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:43.746882 | orchestrator | 2025-05-14 14:45:43 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:43.748310 | orchestrator | 2025-05-14 14:45:43 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state STARTED 2025-05-14 14:45:43.748356 | orchestrator | 2025-05-14 14:45:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:46.795527 | orchestrator | 2025-05-14 14:45:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:46.795734 | orchestrator | 2025-05-14 14:45:46 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:45:46.797138 | orchestrator | 2025-05-14 14:45:46 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:46.799754 | orchestrator | 2025-05-14 14:45:46.799784 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 14:45:46.799789 | orchestrator | 2025-05-14 14:45:46.799794 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-14 14:45:46.799800 | orchestrator | 2025-05-14 14:45:46.799804 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 14:45:46.799809 | orchestrator | Wednesday 14 May 2025 14:43:30 +0000 (0:00:01.099) 0:00:01.099 ********* 2025-05-14 14:45:46.799815 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:45:46.799820 | orchestrator | 2025-05-14 14:45:46.799825 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 14:45:46.799829 | orchestrator | Wednesday 14 May 2025 14:43:31 +0000 (0:00:00.535) 0:00:01.635 ********* 2025-05-14 14:45:46.799834 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 14:45:46.799839 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 14:45:46.799843 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 14:45:46.799896 | orchestrator | 2025-05-14 14:45:46.799913 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 14:45:46.799918 | orchestrator | Wednesday 14 May 2025 14:43:32 +0000 (0:00:00.835) 0:00:02.470 ********* 2025-05-14 14:45:46.799922 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:45:46.799927 | orchestrator | 2025-05-14 14:45:46.799931 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 14:45:46.799936 | orchestrator | Wednesday 14 May 2025 14:43:33 +0000 (0:00:00.778) 0:00:03.249 ********* 2025-05-14 14:45:46.799940 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.799945 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.799949 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.799954 | orchestrator | 2025-05-14 14:45:46.799958 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 14:45:46.799962 | orchestrator | Wednesday 14 May 2025 14:43:33 +0000 (0:00:00.779) 0:00:04.028 ********* 2025-05-14 14:45:46.799966 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.799971 | 
orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.799975 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.799979 | orchestrator | 2025-05-14 14:45:46.799983 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 14:45:46.799988 | orchestrator | Wednesday 14 May 2025 14:43:34 +0000 (0:00:00.316) 0:00:04.344 ********* 2025-05-14 14:45:46.799992 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.799996 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800000 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800004 | orchestrator | 2025-05-14 14:45:46.800009 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 14:45:46.800013 | orchestrator | Wednesday 14 May 2025 14:43:34 +0000 (0:00:00.811) 0:00:05.156 ********* 2025-05-14 14:45:46.800017 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800021 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800026 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800030 | orchestrator | 2025-05-14 14:45:46.800034 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 14:45:46.800038 | orchestrator | Wednesday 14 May 2025 14:43:35 +0000 (0:00:00.301) 0:00:05.457 ********* 2025-05-14 14:45:46.800043 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800047 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800051 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800055 | orchestrator | 2025-05-14 14:45:46.800059 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 14:45:46.800064 | orchestrator | Wednesday 14 May 2025 14:43:35 +0000 (0:00:00.303) 0:00:05.761 ********* 2025-05-14 14:45:46.800068 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800072 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800076 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800081 | orchestrator | 2025-05-14 14:45:46.800085 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 14:45:46.800090 | orchestrator | Wednesday 14 May 2025 14:43:35 +0000 (0:00:00.323) 0:00:06.084 ********* 2025-05-14 14:45:46.800094 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800099 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800103 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800107 | orchestrator | 2025-05-14 14:45:46.800112 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 14:45:46.800116 | orchestrator | Wednesday 14 May 2025 14:43:36 +0000 (0:00:00.452) 0:00:06.537 ********* 2025-05-14 14:45:46.800120 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800124 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800128 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800133 | orchestrator | 2025-05-14 14:45:46.800137 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 14:45:46.800146 | orchestrator | Wednesday 14 May 2025 14:43:36 +0000 (0:00:00.306) 0:00:06.843 ********* 2025-05-14 14:45:46.800150 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 14:45:46.800154 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 
14:45:46.800159 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:45:46.800163 | orchestrator | 2025-05-14 14:45:46.800176 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 14:45:46.800181 | orchestrator | Wednesday 14 May 2025 14:43:37 +0000 (0:00:00.645) 0:00:07.489 ********* 2025-05-14 14:45:46.800185 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800195 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800200 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800204 | orchestrator | 2025-05-14 14:45:46.800208 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 14:45:46.800212 | orchestrator | Wednesday 14 May 2025 14:43:37 +0000 (0:00:00.431) 0:00:07.921 ********* 2025-05-14 14:45:46.800225 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 14:45:46.800229 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:45:46.800234 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:45:46.800238 | orchestrator | 2025-05-14 14:45:46.800242 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 14:45:46.800247 | orchestrator | Wednesday 14 May 2025 14:43:40 +0000 (0:00:02.443) 0:00:10.364 ********* 2025-05-14 14:45:46.800251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:45:46.800255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:45:46.800260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:45:46.800264 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800268 | orchestrator | 2025-05-14 14:45:46.800272 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 14:45:46.800276 | orchestrator | Wednesday 14 May 2025 14:43:40 +0000 (0:00:00.489) 0:00:10.854 ********* 2025-05-14 14:45:46.800285 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 14:45:46.800293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 14:45:46.800297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 14:45:46.800301 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800306 | orchestrator | 2025-05-14 14:45:46.800310 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 14:45:46.800314 | orchestrator | Wednesday 14 May 2025 14:43:41 +0000 (0:00:00.714) 0:00:11.568 ********* 2025-05-14 14:45:46.800320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:45:46.800328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:45:46.800336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:45:46.800341 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800345 | orchestrator | 2025-05-14 14:45:46.800349 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 14:45:46.800353 | orchestrator | Wednesday 14 May 2025 14:43:41 +0000 (0:00:00.171) 0:00:11.740 ********* 2025-05-14 14:45:46.800359 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '5e2cf110b535', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 14:43:38.653716', 'end': '2025-05-14 14:43:38.699792', 'delta': '0:00:00.046076', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5e2cf110b535'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-14 14:45:46.800373 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a0f763ca12a4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 14:43:39.209571', 'end': '2025-05-14 14:43:39.239535', 'delta': '0:00:00.029964', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a0f763ca12a4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-14 14:45:46.800381 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'd9558916fb2d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 14:43:39.770147', 'end': '2025-05-14 14:43:39.805960', 'delta': '0:00:00.035813', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': 
True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d9558916fb2d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-14 14:45:46.800386 | orchestrator | 2025-05-14 14:45:46.800391 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 14:45:46.800396 | orchestrator | Wednesday 14 May 2025 14:43:41 +0000 (0:00:00.201) 0:00:11.941 ********* 2025-05-14 14:45:46.800401 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800405 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.800410 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.800414 | orchestrator | 2025-05-14 14:45:46.800419 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 14:45:46.800424 | orchestrator | Wednesday 14 May 2025 14:43:42 +0000 (0:00:00.477) 0:00:12.418 ********* 2025-05-14 14:45:46.800429 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-14 14:45:46.800437 | orchestrator | 2025-05-14 14:45:46.800441 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 14:45:46.800446 | orchestrator | Wednesday 14 May 2025 14:43:43 +0000 (0:00:01.440) 0:00:13.859 ********* 2025-05-14 14:45:46.800451 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800455 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800460 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800465 | orchestrator | 2025-05-14 14:45:46.800469 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 14:45:46.800474 | orchestrator | Wednesday 14 May 2025 14:43:44 +0000 (0:00:00.508) 0:00:14.368 ********* 2025-05-14 14:45:46.800479 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800483 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800488 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800493 | orchestrator | 2025-05-14 14:45:46.800497 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 14:45:46.800502 | orchestrator | Wednesday 14 May 2025 14:43:44 +0000 (0:00:00.480) 0:00:14.848 ********* 2025-05-14 14:45:46.800507 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800511 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800516 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800521 | orchestrator | 2025-05-14 14:45:46.800526 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 14:45:46.800530 | orchestrator | Wednesday 14 May 2025 14:43:44 +0000 (0:00:00.335) 0:00:15.183 ********* 2025-05-14 14:45:46.800535 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.800540 | orchestrator | 2025-05-14 14:45:46.800544 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 14:45:46.800549 | orchestrator | Wednesday 14 May 2025 14:43:45 +0000 (0:00:00.158) 0:00:15.342 ********* 2025-05-14 14:45:46.800554 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800558 | orchestrator | 2025-05-14 14:45:46.800563 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 14:45:46.800568 | 
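
For readers following the ceph-facts steps above: the "find a running mon container", "set_fact _container_exec_cmd" and "get current fsid if cluster is already running" tasks amount to asking Docker for the monitor container on each mon host and then running ceph inside one of them. A minimal sketch of that pattern follows, assuming an inventory group named mons and the ceph-mon-<hostname> container naming visible in the log; this is an illustration, not the verbatim ceph-ansible source.

# Sketch only: mirrors the docker ps / docker exec pattern shown in the log above.
- name: Find a running mon container on each monitor
  ansible.builtin.command: "docker ps -q --filter name=ceph-mon-{{ item }}"
  register: mon_containers              # one container ID per monitor, e.g. 5e2cf110b535
  delegate_to: "{{ item }}"
  loop: "{{ groups['mons'] }}"          # testbed-node-0/1/2 in this run
  changed_when: false

- name: Build the exec prefix reused by later ceph calls
  ansible.builtin.set_fact:
    container_exec_cmd: "docker exec ceph-mon-{{ groups['mons'][0] }}"

- name: Get current fsid if cluster is already running
  ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph fsid"
  register: current_fsid
  delegate_to: "{{ groups['mons'][2] }}"   # the log delegates this query to testbed-node-2
  changed_when: false
  failed_when: false                       # a non-zero rc would simply mean no cluster yet

Because a container ID came back for every monitor and the fsid query succeeded, the run reuses the existing fsid ("set_fact fsid from current_fsid" is ok) and "generate cluster fsid" is skipped, as the log shows below.
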
orchestrator | Wednesday 14 May 2025 14:43:45 +0000 (0:00:00.282) 0:00:15.624 ********* 2025-05-14 14:45:46.800572 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800577 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800582 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800586 | orchestrator | 2025-05-14 14:45:46.800591 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 14:45:46.800596 | orchestrator | Wednesday 14 May 2025 14:43:45 +0000 (0:00:00.549) 0:00:16.174 ********* 2025-05-14 14:45:46.800600 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800605 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800610 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800614 | orchestrator | 2025-05-14 14:45:46.800619 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 14:45:46.800623 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.317) 0:00:16.492 ********* 2025-05-14 14:45:46.800628 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800633 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800637 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800642 | orchestrator | 2025-05-14 14:45:46.800647 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 14:45:46.800652 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.310) 0:00:16.802 ********* 2025-05-14 14:45:46.800656 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800661 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800668 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800673 | orchestrator | 2025-05-14 14:45:46.800678 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 14:45:46.800683 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.341) 0:00:17.143 ********* 2025-05-14 14:45:46.800688 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800696 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800700 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800705 | orchestrator | 2025-05-14 14:45:46.800710 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 14:45:46.800715 | orchestrator | Wednesday 14 May 2025 14:43:47 +0000 (0:00:00.632) 0:00:17.776 ********* 2025-05-14 14:45:46.800720 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800724 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800729 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800734 | orchestrator | 2025-05-14 14:45:46.800738 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 14:45:46.800742 | orchestrator | Wednesday 14 May 2025 14:43:47 +0000 (0:00:00.331) 0:00:18.108 ********* 2025-05-14 14:45:46.800747 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.800751 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.800758 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.800763 | orchestrator | 2025-05-14 14:45:46.800767 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 14:45:46.800771 | orchestrator | Wednesday 14 May 2025 14:43:48 +0000 
(0:00:00.373) 0:00:18.481 ********* 2025-05-14 14:45:46.800776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd-osd--block--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd', 'dm-uuid-LVM-oe3XGIkJvHmuTCqQQRTGeCAkYgXQgXzd2RsjSfffTM4F5lVpw6hFc3ttiSpRdAV2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46afb65a--1642--5955--80d8--115babed40cc-osd--block--46afb65a--1642--5955--80d8--115babed40cc', 'dm-uuid-LVM-cRQSoffzeBotqALG8g4q1BtUZeu0J29ltwdTcO4fGybmK06xTtxeLfLzLfxo9j4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6-osd--block--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6', 'dm-uuid-LVM-vcT5VQ7OUb1W830jE1EoSTWdQqfqUOnH4im0PxSi38kU4Xy2HwDjl335trLf3UOF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6248da54--4321--5f95--9f37--ef0f81563cc8-osd--block--6248da54--4321--5f95--9f37--ef0f81563cc8', 'dm-uuid-LVM-XOVX0EsNquAUng9MsJj6p0l2DeYaxh8TzqDWDfU2GynMgn0Af0ekd82IHodU8i5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part1', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part14', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part15', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part16', 'scsi-SQEMU_QEMU_HARDDISK_71c7cd21-af9a-43c7-833e-47c0c8f8b580-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_301a6f0d-44e1-4338-b25e-44cbfe2d08d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd-osd--block--5e8c3a6b--4eea--5bb3--8225--c520f5fcabbd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZaXtNO-vR2R-lKjU-8sZ1-xvgA-ZmuC-tIHpBb', 'scsi-0QEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4', 'scsi-SQEMU_QEMU_HARDDISK_2969d5d4-6b61-4174-959d-91757001b3d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6-osd--block--904dffa8--69ed--5eff--9e62--bfdd56e5c3c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oxqQbH-cdgO-Wj5n-JI4t-RKdE-A9iz-EbXT3c', 'scsi-0QEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1', 'scsi-SQEMU_QEMU_HARDDISK_1515eacf-7c8c-4c61-b2e2-7b383c3e44c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--46afb65a--1642--5955--80d8--115babed40cc-osd--block--46afb65a--1642--5955--80d8--115babed40cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvOutx-C5Ro-07d3-5Kq6-8Pik-TgN0-grLCqe', 'scsi-0QEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e', 'scsi-SQEMU_QEMU_HARDDISK_01187494-c8f8-452b-8a71-7cb0e866cd7e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6248da54--4321--5f95--9f37--ef0f81563cc8-osd--block--6248da54--4321--5f95--9f37--ef0f81563cc8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yeQNr2-HpDv-wSbK-oErd-fkvR-pHdP-UTSgbM', 'scsi-0QEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728', 'scsi-SQEMU_QEMU_HARDDISK_60bd9cea-a91d-498b-bf8e-aa0954da2728'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4', 'scsi-SQEMU_QEMU_HARDDISK_40b8d6d7-4545-465c-9849-c8d6aa81e9b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194', 'scsi-SQEMU_QEMU_HARDDISK_ad0bac29-b6ca-48d2-bfa6-0fc9d0f4c194'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.800984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dde3cc5c--c032--592e--96b0--b740b8614a8d-osd--block--dde3cc5c--c032--592e--96b0--b740b8614a8d', 'dm-uuid-LVM-gDtDio710LxXMnniF8MCCubUAbaCS8lJf0GbSZpnRCALMbe8pjMxZ0b9LbRij3gi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.800993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801006 | orchestrator | 2025-05-14 14:45:46 | INFO  | Task 08194bc3-e4c3-4c95-b19a-89eb2da0e332 is in state SUCCESS 2025-05-14 14:45:46.801011 | orchestrator | 2025-05-14 14:45:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:46.801015 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5402478b--0937--58a5--a80f--00ed6e381d0d-osd--block--5402478b--0937--58a5--a80f--00ed6e381d0d', 'dm-uuid-LVM-3Mlz0P66Tjdwlu1DlY9xpPQCdvzGvTq1ozttby7WifLg5gXjuo4MSBXPdCe2HT07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801028 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [],
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:45:46.801077 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_b71e7922-b678-4f73-a76a-9c385d8067f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dde3cc5c--c032--592e--96b0--b740b8614a8d-osd--block--dde3cc5c--c032--592e--96b0--b740b8614a8d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qF2iu1-tNFH-bV6n-x1bQ-1fPY-U0wy-D2l4PA', 'scsi-0QEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640', 'scsi-SQEMU_QEMU_HARDDISK_3506369f-dad3-424e-bb0e-001afa60c640'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5402478b--0937--58a5--a80f--00ed6e381d0d-osd--block--5402478b--0937--58a5--a80f--00ed6e381d0d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cPZ6GG-hKOT-fTQ0-JH3B-bmXl-XIUJ-zXQGfW', 'scsi-0QEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb', 'scsi-SQEMU_QEMU_HARDDISK_0e7ca56e-ad5f-44b1-a048-99cbd42b26bb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb', 'scsi-SQEMU_QEMU_HARDDISK_7e927c4f-d02c-4f8e-99e1-94b2128e93eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-13-49-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:45:46.801110 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801115 | orchestrator | 2025-05-14 14:45:46.801119 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 14:45:46.801124 | orchestrator | Wednesday 14 May 2025 14:43:49 +0000 (0:00:00.799) 0:00:19.281 ********* 2025-05-14 14:45:46.801128 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-14 14:45:46.801132 | orchestrator | 2025-05-14 14:45:46.801137 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 14:45:46.801141 | orchestrator | Wednesday 14 May 2025 14:43:50 +0000 (0:00:01.668) 0:00:20.949 ********* 2025-05-14 14:45:46.801145 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801150 | orchestrator | 2025-05-14 14:45:46.801154 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 14:45:46.801158 | orchestrator | Wednesday 14 May 2025 14:43:50 +0000 (0:00:00.154) 0:00:21.103 ********* 2025-05-14 14:45:46.801162 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801167 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.801171 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.801175 | orchestrator | 2025-05-14 14:45:46.801179 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 14:45:46.801184 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.399) 0:00:21.502 ********* 2025-05-14 14:45:46.801195 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801200 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.801204 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 14:45:46.801208 | orchestrator | 2025-05-14 14:45:46.801212 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 14:45:46.801216 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.712) 0:00:22.215 ********* 2025-05-14 14:45:46.801221 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801225 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.801229 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.801233 | orchestrator | 2025-05-14 14:45:46.801238 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 14:45:46.801242 | orchestrator | Wednesday 14 May 2025 14:43:52 +0000 (0:00:00.341) 0:00:22.556 ********* 2025-05-14 14:45:46.801246 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801250 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.801255 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.801259 | orchestrator | 2025-05-14 14:45:46.801263 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 14:45:46.801268 | orchestrator | Wednesday 14 May 2025 14:43:53 +0000 (0:00:01.011) 0:00:23.568 ********* 2025-05-14 14:45:46.801272 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801276 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801280 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801285 | orchestrator | 2025-05-14 14:45:46.801289 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 14:45:46.801293 | orchestrator | Wednesday 14 May 2025 14:43:53 +0000 (0:00:00.316) 0:00:23.884 ********* 2025-05-14 14:45:46.801297 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801302 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801306 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801310 | orchestrator | 2025-05-14 14:45:46.801314 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 14:45:46.801319 | orchestrator | Wednesday 14 May 2025 14:43:54 +0000 (0:00:00.436) 0:00:24.320 ********* 2025-05-14 14:45:46.801323 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801327 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801331 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801335 | orchestrator | 2025-05-14 14:45:46.801340 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 14:45:46.801344 | orchestrator | Wednesday 14 May 2025 14:43:54 +0000 (0:00:00.321) 0:00:24.642 ********* 2025-05-14 14:45:46.801348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:45:46.801352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:45:46.801356 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:45:46.801361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:45:46.801365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:45:46.801373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:45:46.801377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:45:46.801382 | orchestrator 
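
The long skipped loop above ("set_fact devices generate device list when osd_auto_discovery") walks ansible_facts.devices on each OSD host and would only keep whole, unused disks; its being skipped here normally means osd_auto_discovery is not enabled and the device list is supplied explicitly. A rough sketch of that filtering pattern follows; the exact when-conditions used by ceph-ansible are not visible in this log, so the ones below are assumptions for illustration only.

# Assumption-laden sketch of the osd_auto_discovery filtering pattern, not the real task.
- name: Generate device list when osd_auto_discovery
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  when:
    - osd_auto_discovery | default(false) | bool
    - item.value.partitions | length == 0        # e.g. sda carries the root partitions
    - item.value.holders | length == 0           # sdb/sdc are already claimed by ceph LVs
    - not item.key.startswith('loop')            # ignore loop devices
    - not item.key.startswith('sr')              # ignore the config-drive CD-ROM
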
| skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:45:46.801389 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:45:46.801398 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801402 | orchestrator | 2025-05-14 14:45:46.801406 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 14:45:46.801410 | orchestrator | Wednesday 14 May 2025 14:43:55 +0000 (0:00:01.004) 0:00:25.646 ********* 2025-05-14 14:45:46.801415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:45:46.801422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:45:46.801427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:45:46.801431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:45:46.801435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:45:46.801439 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801444 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:45:46.801448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:45:46.801454 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:45:46.801459 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:45:46.801467 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801471 | orchestrator | 2025-05-14 14:45:46.801476 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 14:45:46.801480 | orchestrator | Wednesday 14 May 2025 14:43:56 +0000 (0:00:00.705) 0:00:26.351 ********* 2025-05-14 14:45:46.801484 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 14:45:46.801489 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-14 14:45:46.801493 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-14 14:45:46.801497 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-14 14:45:46.801501 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 14:45:46.801505 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-14 14:45:46.801510 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-14 14:45:46.801514 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 14:45:46.801518 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-14 14:45:46.801522 | orchestrator | 2025-05-14 14:45:46.801526 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 14:45:46.801531 | orchestrator | Wednesday 14 May 2025 14:43:58 +0000 (0:00:01.934) 0:00:28.286 ********* 2025-05-14 14:45:46.801535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:45:46.801539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:45:46.801543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:45:46.801548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:45:46.801552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 
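
Of the three "_monitor_addresses" variants above, only the plain monitor_address one ran; the address_block and interface variants were skipped. The fact it builds is a list of {name, addr} pairs, which is exactly the shape that reappears in the _current_monitor_address items further down (testbed-node-0/1/2 with 192.168.16.10-12). A condensed sketch of that fact-building pattern, with variable names taken from the log and the loop and group names assumed:

# Sketch of the monitor address fact seen in the log; loop/group names are assumptions.
- name: set_fact _monitor_addresses to monitor_address
  ansible.builtin.set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}"
  loop: "{{ groups['mons'] }}"          # testbed-node-0/1/2 in this deployment
  when: hostvars[item]['monitor_address'] is defined
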
14:45:46.801556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:45:46.801560 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801564 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:45:46.801573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:45:46.801577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:45:46.801581 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801585 | orchestrator | 2025-05-14 14:45:46.801590 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 14:45:46.801594 | orchestrator | Wednesday 14 May 2025 14:43:58 +0000 (0:00:00.627) 0:00:28.913 ********* 2025-05-14 14:45:46.801598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 14:45:46.801602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 14:45:46.801607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 14:45:46.801611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 14:45:46.801615 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 14:45:46.801619 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 14:45:46.801623 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801631 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801635 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 14:45:46.801639 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 14:45:46.801644 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 14:45:46.801648 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801652 | orchestrator | 2025-05-14 14:45:46.801657 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 14:45:46.801661 | orchestrator | Wednesday 14 May 2025 14:43:59 +0000 (0:00:00.457) 0:00:29.371 ********* 2025-05-14 14:45:46.801665 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:45:46.801670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:45:46.801674 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:45:46.801678 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:45:46.801683 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:45:46.801690 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:45:46.801694 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801699 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801703 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 14:45:46.801707 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:45:46.801711 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:45:46.801716 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801720 | orchestrator | 2025-05-14 14:45:46.801724 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 14:45:46.801729 | orchestrator | Wednesday 14 May 2025 14:43:59 +0000 (0:00:00.428) 0:00:29.800 ********* 2025-05-14 14:45:46.801733 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:45:46.801737 | orchestrator | 2025-05-14 14:45:46.801744 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 14:45:46.801748 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.712) 0:00:30.512 ********* 2025-05-14 14:45:46.801753 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801757 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801761 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801766 | orchestrator | 2025-05-14 14:45:46.801770 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 14:45:46.801774 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.326) 0:00:30.839 ********* 2025-05-14 14:45:46.801779 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801783 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801787 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801791 | orchestrator | 2025-05-14 14:45:46.801796 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 14:45:46.801800 | orchestrator | Wednesday 14 May 2025 14:44:00 +0000 (0:00:00.365) 0:00:31.205 ********* 2025-05-14 14:45:46.801804 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801808 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.801813 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.801817 | orchestrator | 2025-05-14 14:45:46.801821 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 14:45:46.801825 | orchestrator | Wednesday 14 May 2025 14:44:01 +0000 (0:00:00.342) 0:00:31.548 ********* 2025-05-14 14:45:46.801833 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801837 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.801841 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.801845 | orchestrator | 2025-05-14 14:45:46.801849 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 14:45:46.801854 | orchestrator | Wednesday 14 May 2025 14:44:01 +0000 (0:00:00.684) 0:00:32.233 ********* 2025-05-14 14:45:46.801858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:45:46.801871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:45:46.801875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:45:46.801880 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801884 | orchestrator | 2025-05-14 14:45:46.801888 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 14:45:46.801892 | orchestrator | Wednesday 14 May 2025 14:44:02 +0000 (0:00:00.385) 0:00:32.618 ********* 2025-05-14 
14:45:46.801897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:45:46.801901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:45:46.801905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:45:46.801909 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801913 | orchestrator | 2025-05-14 14:45:46.801918 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 14:45:46.801922 | orchestrator | Wednesday 14 May 2025 14:44:02 +0000 (0:00:00.391) 0:00:33.009 ********* 2025-05-14 14:45:46.801926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:45:46.801930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:45:46.801934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:45:46.801938 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.801943 | orchestrator | 2025-05-14 14:45:46.801947 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:45:46.801951 | orchestrator | Wednesday 14 May 2025 14:44:03 +0000 (0:00:00.395) 0:00:33.405 ********* 2025-05-14 14:45:46.801955 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:45:46.801960 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:45:46.801964 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:45:46.801968 | orchestrator | 2025-05-14 14:45:46.801973 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 14:45:46.801977 | orchestrator | Wednesday 14 May 2025 14:44:03 +0000 (0:00:00.304) 0:00:33.710 ********* 2025-05-14 14:45:46.801981 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 14:45:46.801986 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 14:45:46.801990 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 14:45:46.801994 | orchestrator | 2025-05-14 14:45:46.801998 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 14:45:46.802002 | orchestrator | Wednesday 14 May 2025 14:44:04 +0000 (0:00:00.919) 0:00:34.630 ********* 2025-05-14 14:45:46.802007 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802011 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802049 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.802054 | orchestrator | 2025-05-14 14:45:46.802058 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 14:45:46.802062 | orchestrator | Wednesday 14 May 2025 14:44:04 +0000 (0:00:00.525) 0:00:35.155 ********* 2025-05-14 14:45:46.802066 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802074 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802078 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.802083 | orchestrator | 2025-05-14 14:45:46.802087 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 14:45:46.802091 | orchestrator | Wednesday 14 May 2025 14:44:05 +0000 (0:00:00.377) 0:00:35.533 ********* 2025-05-14 14:45:46.802096 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 14:45:46.802105 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802109 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 14:45:46.802113 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802117 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 14:45:46.802121 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.802126 | orchestrator | 2025-05-14 14:45:46.802130 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 14:45:46.802134 | orchestrator | Wednesday 14 May 2025 14:44:05 +0000 (0:00:00.636) 0:00:36.170 ********* 2025-05-14 14:45:46.802138 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 14:45:46.802146 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802150 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 14:45:46.802155 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802159 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 14:45:46.802163 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.802168 | orchestrator | 2025-05-14 14:45:46.802172 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 14:45:46.802176 | orchestrator | Wednesday 14 May 2025 14:44:06 +0000 (0:00:00.572) 0:00:36.743 ********* 2025-05-14 14:45:46.802180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 14:45:46.802184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 14:45:46.802189 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 14:45:46.802193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 14:45:46.802197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 14:45:46.802201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 14:45:46.802205 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802210 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 14:45:46.802214 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 14:45:46.802222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 14:45:46.802226 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.802231 | orchestrator | 2025-05-14 14:45:46.802235 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 14:45:46.802239 | orchestrator | Wednesday 14 May 2025 14:44:07 +0000 (0:00:00.714) 0:00:37.457 ********* 2025-05-14 14:45:46.802243 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802248 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802252 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:45:46.802256 | orchestrator | 2025-05-14 14:45:46.802260 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 14:45:46.802264 | orchestrator | Wednesday 14 May 2025 14:44:07 +0000 (0:00:00.393) 0:00:37.851 ********* 2025-05-14 14:45:46.802269 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 14:45:46.802273 | orchestrator | 
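The ceph-facts tasks above resolve _radosgw_address by checking radosgw_address_block, then radosgw_address, then radosgw_interface; in this run only the radosgw_address branch returns ok, so the block- and interface-based branches are skipped. A minimal sketch of the three alternative ceph-ansible settings (only the frontend port and the .13 address are taken from the rgw_instances items logged above, the rest is illustrative):

    # ceph-ansible group_vars -- set exactly one of the three address sources
    radosgw_address: 192.168.16.13             # explicit address (the branch taken here)
    # radosgw_address_block: 192.168.16.0/20   # or: first address inside a CIDR
    # radosgw_interface: eth1                  # or: resolve the address from an interface
    radosgw_frontend_port: 8081                # matches the rgw_instances entries above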
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:45:46.802277 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:45:46.802281 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-14 14:45:46.802286 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 14:45:46.802290 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 14:45:46.802294 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 14:45:46.802302 | orchestrator | 2025-05-14 14:45:46.802306 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 14:45:46.802310 | orchestrator | Wednesday 14 May 2025 14:44:08 +0000 (0:00:01.085) 0:00:38.937 ********* 2025-05-14 14:45:46.802314 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 14:45:46.802319 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:45:46.802323 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:45:46.802327 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-14 14:45:46.802332 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 14:45:46.802336 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 14:45:46.802340 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 14:45:46.802344 | orchestrator | 2025-05-14 14:45:46.802348 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-14 14:45:46.802353 | orchestrator | Wednesday 14 May 2025 14:44:10 +0000 (0:00:01.899) 0:00:40.837 ********* 2025-05-14 14:45:46.802357 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:45:46.802364 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:45:46.802368 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-14 14:45:46.802373 | orchestrator | 2025-05-14 14:45:46.802377 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-14 14:45:46.802381 | orchestrator | Wednesday 14 May 2025 14:44:11 +0000 (0:00:00.559) 0:00:41.397 ********* 2025-05-14 14:45:46.802387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:45:46.802397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:45:46.802402 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:45:46.802406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:45:46.802410 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 14:45:46.802415 | orchestrator | 2025-05-14 14:45:46.802419 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-14 14:45:46.802423 | orchestrator | Wednesday 14 May 2025 14:44:53 +0000 (0:00:42.289) 0:01:23.687 ********* 2025-05-14 14:45:46.802428 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802432 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802436 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802444 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802448 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802453 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802457 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-14 14:45:46.802461 | orchestrator | 2025-05-14 14:45:46.802465 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-14 14:45:46.802470 | orchestrator | Wednesday 14 May 2025 14:45:14 +0000 (0:00:21.178) 0:01:44.865 ********* 2025-05-14 14:45:46.802474 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802478 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802482 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802487 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802491 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802499 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 14:45:46.802504 | orchestrator | 2025-05-14 14:45:46.802508 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-14 14:45:46.802512 | orchestrator | Wednesday 14 May 2025 14:45:24 +0000 (0:00:10.287) 0:01:55.153 ********* 2025-05-14 14:45:46.802516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802521 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 14:45:46.802525 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 14:45:46.802529 | orchestrator | 
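The dictionaries looped over by "create openstack pool(s)" are the openstack_pools entries consumed by /ansible/tasks/openstack_config.yml. Reconstructed from the logged items (backups, volumes, images, metrics and vms all use the same settings), one entry looks roughly like this sketch, not a verbatim copy of the configuration file:

    openstack_pools:
      - name: volumes
        application: rbd
        pg_num: 32
        pgp_num: 32
        pg_autoscale_mode: false
        size: 3
        min_size: 0
        rule_name: replicated_rule
        type: 1                      # replicated pool
        erasure_profile: ""
        expected_num_objects: ""

This loop accounts for the 42-second entry at the top of the TASKS RECAP further below.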
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802533 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 14:45:46.802538 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 14:45:46.802542 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802546 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 14:45:46.802554 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 14:45:46.802558 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802562 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 14:45:46.802567 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 14:45:46.802571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802575 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 14:45:46.802579 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 14:45:46.802584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 14:45:46.802588 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 14:45:46.802592 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 14:45:46.802599 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-14 14:45:46.802603 | orchestrator | 2025-05-14 14:45:46.802608 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:45:46.802612 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-14 14:45:46.802620 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-14 14:45:46.802625 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-14 14:45:46.802629 | orchestrator | 2025-05-14 14:45:46.802634 | orchestrator | 2025-05-14 14:45:46.802638 | orchestrator | 2025-05-14 14:45:46.802642 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:45:46.802646 | orchestrator | Wednesday 14 May 2025 14:45:43 +0000 (0:00:18.657) 0:02:13.810 ********* 2025-05-14 14:45:46.802651 | orchestrator | =============================================================================== 2025-05-14 14:45:46.802655 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.29s 2025-05-14 14:45:46.802659 | orchestrator | generate keys ---------------------------------------------------------- 21.18s 2025-05-14 14:45:46.802663 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.66s 2025-05-14 14:45:46.802668 | orchestrator | get keys from monitors ------------------------------------------------- 10.29s 2025-05-14 14:45:46.802672 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.44s 2025-05-14 14:45:46.802676 | orchestrator | ceph-facts 
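The "generate keys", "get keys from monitors" and "copy ceph key(s) if needed" loops print item=None because the key material is masked in the output; in ceph-ansible such items normally come from an openstack_keys list. A hypothetical entry in that format (the client name and caps below are assumptions, the real values are not visible in this log):

    openstack_keys:
      - name: client.glance                  # hypothetical client name
        caps:
          mon: "profile rbd"
          osd: "profile rbd pool=images"     # assumed pool mapping
        mode: "0600"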
: set_fact _monitor_addresses to monitor_address ------------- 1.93s 2025-05-14 14:45:46.802680 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.90s 2025-05-14 14:45:46.802684 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.67s 2025-05-14 14:45:46.802688 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.44s 2025-05-14 14:45:46.802693 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.09s 2025-05-14 14:45:46.802697 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.01s 2025-05-14 14:45:46.802701 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.00s 2025-05-14 14:45:46.802705 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.92s 2025-05-14 14:45:46.802709 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.84s 2025-05-14 14:45:46.802714 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.81s 2025-05-14 14:45:46.802718 | orchestrator | ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.80s 2025-05-14 14:45:46.802722 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.78s 2025-05-14 14:45:46.802726 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.78s 2025-05-14 14:45:46.802731 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.71s 2025-05-14 14:45:46.802735 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.71s 2025-05-14 14:45:49.853156 | orchestrator | 2025-05-14 14:45:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:49.857495 | orchestrator | 2025-05-14 14:45:49 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:45:49.857541 | orchestrator | 2025-05-14 14:45:49 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:49.857556 | orchestrator | 2025-05-14 14:45:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:52.905931 | orchestrator | 2025-05-14 14:45:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:52.906324 | orchestrator | 2025-05-14 14:45:52 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:45:52.906452 | orchestrator | 2025-05-14 14:45:52 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:52.906619 | orchestrator | 2025-05-14 14:45:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:55.959227 | orchestrator | 2025-05-14 14:45:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:55.961138 | orchestrator | 2025-05-14 14:45:55 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:45:55.962836 | orchestrator | 2025-05-14 14:45:55 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:45:55.964180 | orchestrator | 2025-05-14 14:45:55 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:55.964207 | orchestrator | 2025-05-14 14:45:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:45:59.008938 | orchestrator | 
2025-05-14 14:45:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:45:59.010268 | orchestrator | 2025-05-14 14:45:59 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:45:59.011663 | orchestrator | 2025-05-14 14:45:59 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:45:59.013082 | orchestrator | 2025-05-14 14:45:59 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:45:59.013125 | orchestrator | 2025-05-14 14:45:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:02.049240 | orchestrator | 2025-05-14 14:46:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:02.049930 | orchestrator | 2025-05-14 14:46:02 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:02.050451 | orchestrator | 2025-05-14 14:46:02 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:02.051404 | orchestrator | 2025-05-14 14:46:02 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:46:02.051454 | orchestrator | 2025-05-14 14:46:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:05.107864 | orchestrator | 2025-05-14 14:46:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:05.111256 | orchestrator | 2025-05-14 14:46:05 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:05.112016 | orchestrator | 2025-05-14 14:46:05 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:05.113990 | orchestrator | 2025-05-14 14:46:05 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:46:05.114172 | orchestrator | 2025-05-14 14:46:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:08.155027 | orchestrator | 2025-05-14 14:46:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:08.156095 | orchestrator | 2025-05-14 14:46:08 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:08.157119 | orchestrator | 2025-05-14 14:46:08 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:08.158087 | orchestrator | 2025-05-14 14:46:08 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:46:08.158107 | orchestrator | 2025-05-14 14:46:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:11.212553 | orchestrator | 2025-05-14 14:46:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:11.215567 | orchestrator | 2025-05-14 14:46:11 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:11.218290 | orchestrator | 2025-05-14 14:46:11 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:11.219766 | orchestrator | 2025-05-14 14:46:11 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:46:11.219803 | orchestrator | 2025-05-14 14:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:14.271522 | orchestrator | 2025-05-14 14:46:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:14.272549 | orchestrator | 2025-05-14 14:46:14 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:14.273595 | orchestrator | 
2025-05-14 14:46:14 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:14.275154 | orchestrator | 2025-05-14 14:46:14 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:46:14.275419 | orchestrator | 2025-05-14 14:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:17.327499 | orchestrator | 2025-05-14 14:46:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:17.328353 | orchestrator | 2025-05-14 14:46:17 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:17.329637 | orchestrator | 2025-05-14 14:46:17 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:17.331123 | orchestrator | 2025-05-14 14:46:17 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state STARTED 2025-05-14 14:46:17.331342 | orchestrator | 2025-05-14 14:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:20.387266 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:20.390484 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:20.393578 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:20.394659 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:20.401576 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state STARTED 2025-05-14 14:46:20.401656 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:20.403315 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:20.403369 | orchestrator | 2025-05-14 14:46:20 | INFO  | Task 2960211a-300d-4955-b465-86491bad50c5 is in state SUCCESS 2025-05-14 14:46:20.405241 | orchestrator | 2025-05-14 14:46:20.405350 | orchestrator | 2025-05-14 14:46:20.405362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:46:20.405374 | orchestrator | 2025-05-14 14:46:20.405385 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:46:20.405396 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.333) 0:00:00.333 ********* 2025-05-14 14:46:20.405408 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.405420 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:46:20.405430 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:46:20.405441 | orchestrator | 2025-05-14 14:46:20.405452 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:46:20.405463 | orchestrator | Wednesday 14 May 2025 14:43:46 +0000 (0:00:00.434) 0:00:00.768 ********* 2025-05-14 14:46:20.405542 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-14 14:46:20.405554 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-14 14:46:20.405565 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-14 14:46:20.405603 | orchestrator | 2025-05-14 14:46:20.405615 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-14 14:46:20.405626 | 
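The host grouping just above sorts all three control nodes into enable_keystone_True, which is what feeds them into the "Apply role keystone" play. In kolla-ansible that grouping is derived from the enable_* service flags; a minimal sketch of the relevant switch (a standard kolla-ansible variable, value and file location assumed for this testbed):

    # kolla globals (in the OSISM layout usually environments/kolla/configuration.yml)
    enable_keystone: "yes"    # produces the enable_keystone_True group used by the play above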
orchestrator | 2025-05-14 14:46:20.405637 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 14:46:20.405648 | orchestrator | Wednesday 14 May 2025 14:43:47 +0000 (0:00:00.372) 0:00:01.141 ********* 2025-05-14 14:46:20.405660 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:46:20.405672 | orchestrator | 2025-05-14 14:46:20.405684 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-14 14:46:20.405695 | orchestrator | Wednesday 14 May 2025 14:43:48 +0000 (0:00:00.881) 0:00:02.023 ********* 2025-05-14 14:46:20.405713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.405732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.405831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.405862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.405887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.405900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.405937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.405960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
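The item printed for "Ensuring config directories exist" is the keystone entry of the role's keystone_services dictionary; rewritten as YAML from the logged value (trimmed to testbed-node-0, haproxy extras omitted) it reads:

    keystone:
      container_name: keystone
      group: keystone
      enabled: true
      image: registry.osism.tech/kolla/release/keystone:25.0.1.20241206
      volumes:
        - "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "kolla_logs:/var/log/kolla/"
        - "keystone_fernet_tokens:/etc/keystone/fernet-keys"
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"]
        timeout: "30"
      haproxy:
        keystone_internal: {enabled: true, mode: http, external: false, tls_backend: "no", port: "5000", listen_port: "5000"}
        keystone_external: {enabled: true, mode: http, external: true, external_fqdn: api.testbed.osism.xyz, tls_backend: "no", port: "5000", listen_port: "5000"}

The keystone-ssh and keystone-fernet entries that follow have the same shape, differing only in image, volumes and healthcheck command.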
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.405988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406009 | orchestrator | 2025-05-14 14:46:20.406102 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-14 14:46:20.406137 | orchestrator | Wednesday 14 May 2025 14:43:50 +0000 (0:00:02.541) 0:00:04.564 ********* 2025-05-14 14:46:20.406158 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-14 14:46:20.406194 | orchestrator | 2025-05-14 14:46:20.406215 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-14 14:46:20.406235 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.588) 0:00:05.153 ********* 2025-05-14 14:46:20.406254 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.406273 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:46:20.406290 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:46:20.406308 | orchestrator | 2025-05-14 14:46:20.406327 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-14 14:46:20.406346 | orchestrator | Wednesday 14 May 2025 14:43:51 +0000 (0:00:00.474) 0:00:05.627 ********* 2025-05-14 14:46:20.406366 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:46:20.406387 | orchestrator | 2025-05-14 14:46:20.406406 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 14:46:20.406425 | orchestrator | Wednesday 14 May 2025 14:43:52 +0000 (0:00:00.447) 0:00:06.074 ********* 2025-05-14 14:46:20.406444 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:46:20.406464 | orchestrator | 2025-05-14 14:46:20.406483 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-14 14:46:20.406502 | orchestrator | Wednesday 14 May 2025 14:43:52 +0000 (0:00:00.720) 0:00:06.795 ********* 2025-05-14 14:46:20.406525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
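Because "Check if policies shall be overwritten" finds /opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml, "Set keystone policy file" wires a custom oslo.policy file into the keystone container. A sketch of what such an overlay can contain; the rule names and assignments below are illustrative and not taken from this testbed:

    # oslo.policy overrides merged over keystone's built-in defaults
    "identity:list_users": "role:admin or role:reader"
    "identity:create_project": "role:admin"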
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.406548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.406594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.406632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.406770 | orchestrator | 2025-05-14 14:46:20.406791 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-14 14:46:20.406810 | orchestrator | Wednesday 14 May 2025 14:43:56 +0000 (0:00:03.155) 0:00:09.951 ********* 2025-05-14 14:46:20.406845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:46:20.406909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.406988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:46:20.407001 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.407014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:46:20.407043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.407067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:46:20.407079 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.407095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:46:20.407116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.407135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:46:20.407153 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.407172 | orchestrator | 2025-05-14 14:46:20.407189 | orchestrator | TASK [service-cert-copy : keystone | Copying over 
backend internal TLS key] **** 2025-05-14 14:46:20.407209 | orchestrator | Wednesday 14 May 2025 14:43:57 +0000 (0:00:01.453) 0:00:11.404 ********* 2025-05-14 14:46:20.407261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:46:20.407299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.407322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:46:20.407344 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.407367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:46:20.407390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.407430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:46:20.407448 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.407482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 14:46:20.407504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.407523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
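Both "Copying over backend internal TLS certificate" and "Copying over backend internal TLS key" are skipped on every node because the service definitions above carry tls_backend: 'no'. If backend TLS were desired, the usual kolla-ansible switches look like the following sketch (an assumption about how it would be enabled, not something this job does):

    # kolla globals -- not set in this run
    kolla_enable_tls_backend: "yes"         # would make service-cert-copy install backend certs/keys
    kolla_copy_ca_into_containers: "yes"    # pairs with the "Copying over extra CA certificates" task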
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 14:46:20.407542 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.407562 | orchestrator | 2025-05-14 14:46:20.407581 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-14 14:46:20.407601 | orchestrator | Wednesday 14 May 2025 14:43:58 +0000 (0:00:01.280) 0:00:12.684 ********* 2025-05-14 14:46:20.407622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.407666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.407704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.407724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.407746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.407766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.407808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.407831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.407864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.407886 | orchestrator | 2025-05-14 14:46:20.407906 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-14 14:46:20.408085 | orchestrator | Wednesday 14 May 2025 14:44:02 +0000 (0:00:03.556) 0:00:16.240 ********* 2025-05-14 14:46:20.408106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.408119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.408149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.408161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.408186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.408198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.408208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.408226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.408241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.408252 | orchestrator | 2025-05-14 14:46:20.408262 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-14 14:46:20.408272 | orchestrator | Wednesday 14 May 2025 14:44:10 +0000 (0:00:07.947) 0:00:24.188 ********* 2025-05-14 14:46:20.408282 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.408292 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:46:20.408301 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:46:20.408311 | orchestrator | 2025-05-14 14:46:20.408321 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-14 14:46:20.408331 | orchestrator | Wednesday 14 May 2025 14:44:12 +0000 (0:00:02.348) 0:00:26.536 ********* 2025-05-14 14:46:20.408340 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.408350 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.408359 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.408369 | orchestrator | 2025-05-14 14:46:20.408384 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-14 14:46:20.408394 | orchestrator | Wednesday 14 May 2025 14:44:13 +0000 (0:00:00.869) 0:00:27.406 ********* 2025-05-14 14:46:20.408404 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.408413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.408422 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.408432 | orchestrator | 2025-05-14 14:46:20.408442 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-14 14:46:20.408451 | orchestrator | Wednesday 14 May 2025 14:44:14 +0000 (0:00:00.516) 0:00:27.922 ********* 2025-05-14 14:46:20.408461 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.408470 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.408479 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.408489 | orchestrator | 2025-05-14 14:46:20.408498 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-14 14:46:20.408508 | orchestrator | Wednesday 14 May 2025 14:44:14 +0000 (0:00:00.483) 0:00:28.406 ********* 2025-05-14 14:46:20.408518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.408536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.408547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.408562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.408580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.408597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 14:46:20.408608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.408618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.408633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.408643 | orchestrator | 2025-05-14 14:46:20.408653 | orchestrator | 
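[Editor's note] For readability, the loop item that drives the keystone tasks above (config.json, keystone.conf, startup script, policy file) can be rendered as YAML. Every value below is copied from the logged item for testbed-node-0; the empty placeholder entry in the volumes list is omitted. This is only a re-rendering of the logged data, not the role's source:

# Illustrative YAML view of the logged 'keystone' service item (testbed-node-0)
keystone:
  container_name: keystone
  group: keystone
  enabled: true
  image: registry.osism.tech/kolla/release/keystone:25.0.1.20241206
  volumes:
    - "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
    - "keystone_fernet_tokens:/etc/keystone/fernet-keys"
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"]
    timeout: "30"
  haproxy:
    keystone_internal:
      enabled: true
      mode: http
      external: false
      tls_backend: "no"
      port: "5000"
      listen_port: "5000"
      backend_http_extra: ['balance "roundrobin"']
    keystone_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      tls_backend: "no"
      port: "5000"
      listen_port: "5000"
      backend_http_extra: ['balance "roundrobin"']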
TASK [keystone : include_tasks] ************************************************ 2025-05-14 14:46:20.408662 | orchestrator | Wednesday 14 May 2025 14:44:17 +0000 (0:00:02.580) 0:00:30.986 ********* 2025-05-14 14:46:20.408672 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.408682 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.408692 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.408701 | orchestrator | 2025-05-14 14:46:20.408711 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-14 14:46:20.408721 | orchestrator | Wednesday 14 May 2025 14:44:17 +0000 (0:00:00.428) 0:00:31.414 ********* 2025-05-14 14:46:20.408731 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 14:46:20.408741 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 14:46:20.408756 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 14:46:20.408766 | orchestrator | 2025-05-14 14:46:20.408776 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-14 14:46:20.408785 | orchestrator | Wednesday 14 May 2025 14:44:19 +0000 (0:00:02.222) 0:00:33.637 ********* 2025-05-14 14:46:20.408795 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:46:20.408804 | orchestrator | 2025-05-14 14:46:20.408825 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-14 14:46:20.408835 | orchestrator | Wednesday 14 May 2025 14:44:20 +0000 (0:00:00.780) 0:00:34.417 ********* 2025-05-14 14:46:20.408844 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.408854 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.408863 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.408873 | orchestrator | 2025-05-14 14:46:20.408882 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-14 14:46:20.408892 | orchestrator | Wednesday 14 May 2025 14:44:21 +0000 (0:00:00.891) 0:00:35.309 ********* 2025-05-14 14:46:20.408901 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 14:46:20.408911 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:46:20.408963 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 14:46:20.408980 | orchestrator | 2025-05-14 14:46:20.408996 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-14 14:46:20.409012 | orchestrator | Wednesday 14 May 2025 14:44:22 +0000 (0:00:01.166) 0:00:36.475 ********* 2025-05-14 14:46:20.409023 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.409032 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:46:20.409041 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:46:20.409051 | orchestrator | 2025-05-14 14:46:20.409060 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-14 14:46:20.409069 | orchestrator | Wednesday 14 May 2025 14:44:22 +0000 (0:00:00.344) 0:00:36.820 ********* 2025-05-14 14:46:20.409079 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 14:46:20.409088 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 14:46:20.409098 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 14:46:20.409107 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 14:46:20.409117 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 14:46:20.409126 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 14:46:20.409136 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 14:46:20.409145 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 14:46:20.409155 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 14:46:20.409164 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 14:46:20.409174 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 14:46:20.409183 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 14:46:20.409192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 14:46:20.409202 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 14:46:20.409211 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 14:46:20.409221 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 14:46:20.409230 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 14:46:20.409240 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 14:46:20.409250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 14:46:20.409259 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 14:46:20.409284 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 14:46:20.409294 | orchestrator | 2025-05-14 14:46:20.409304 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-14 14:46:20.409313 | orchestrator | Wednesday 14 May 2025 14:44:33 +0000 (0:00:10.240) 0:00:47.060 ********* 2025-05-14 14:46:20.409323 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 14:46:20.409332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 14:46:20.409341 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 14:46:20.409351 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 14:46:20.409361 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 14:46:20.409377 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
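[Editor's note] A minimal sketch of what the "Copying files for keystone-fernet" and "Copying files for keystone-ssh" loops above amount to, assuming a simple template task over src/dest pairs. The module, destination directories, and file mode are assumptions for illustration only; the src/dest pairs are taken verbatim from the logged items:

# Sketch only: assumed task shape, not the actual role source.
- name: Copying files for keystone-fernet
  ansible.builtin.template:
    src: "{{ item.src }}"
    dest: "/etc/kolla/keystone-fernet/{{ item.dest }}"  # assumed destination
    mode: "0660"                                        # assumed mode
  become: true
  loop:
    - { src: "crontab.j2", dest: "crontab" }
    - { src: "fernet-rotate.sh.j2", dest: "fernet-rotate.sh" }
    - { src: "fernet-node-sync.sh.j2", dest: "fernet-node-sync.sh" }
    - { src: "fernet-push.sh.j2", dest: "fernet-push.sh" }
    - { src: "fernet-healthcheck.sh.j2", dest: "fernet-healthcheck.sh" }
    - { src: "id_rsa", dest: "id_rsa" }
    - { src: "ssh_config.j2", dest: "ssh_config" }

- name: Copying files for keystone-ssh
  ansible.builtin.template:
    src: "{{ item.src }}"
    dest: "/etc/kolla/keystone-ssh/{{ item.dest }}"     # assumed destination
    mode: "0660"                                        # assumed mode
  become: true
  loop:
    - { src: "sshd_config.j2", dest: "sshd_config" }
    - { src: "id_rsa.pub", dest: "id_rsa.pub" }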
2025-05-14 14:46:20.409387 | orchestrator | 2025-05-14 14:46:20.409397 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-14 14:46:20.409406 | orchestrator | Wednesday 14 May 2025 14:44:36 +0000 (0:00:03.096) 0:00:50.157 ********* 2025-05-14 14:46:20.409417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.409430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.409441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 14:46:20.409464 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.409483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.409494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 14:46:20.409504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.409514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.409525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 14:46:20.409542 | orchestrator | 2025-05-14 14:46:20.409553 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 14:46:20.409564 | orchestrator | Wednesday 14 May 2025 14:44:39 +0000 (0:00:02.935) 0:00:53.092 ********* 2025-05-14 14:46:20.409575 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.409585 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.409596 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.409607 | orchestrator | 2025-05-14 14:46:20.409618 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-14 14:46:20.409633 | orchestrator | Wednesday 14 May 2025 14:44:39 +0000 (0:00:00.307) 0:00:53.400 ********* 2025-05-14 14:46:20.409644 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.409655 | orchestrator | 2025-05-14 14:46:20.409666 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-14 14:46:20.409676 | orchestrator | Wednesday 14 May 2025 14:44:42 +0000 (0:00:02.724) 0:00:56.125 ********* 2025-05-14 14:46:20.409687 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.409698 | orchestrator | 2025-05-14 14:46:20.409708 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-14 14:46:20.409719 | orchestrator | Wednesday 14 May 2025 14:44:44 +0000 (0:00:02.245) 0:00:58.370 ********* 2025-05-14 14:46:20.409729 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.409740 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:46:20.409751 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:46:20.409761 | orchestrator | 2025-05-14 14:46:20.409771 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-14 14:46:20.409782 | orchestrator | Wednesday 14 May 2025 14:44:45 +0000 (0:00:00.898) 0:00:59.268 ********* 2025-05-14 14:46:20.409793 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.409809 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:46:20.409820 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:46:20.409831 | orchestrator | 2025-05-14 14:46:20.409842 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-14 14:46:20.409853 | orchestrator | Wednesday 14 May 2025 14:44:45 +0000 (0:00:00.292) 0:00:59.561 ********* 2025-05-14 14:46:20.409863 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.409874 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.409884 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.409895 | orchestrator | 2025-05-14 14:46:20.409906 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-14 14:46:20.409942 | orchestrator | Wednesday 14 May 2025 14:44:46 +0000 (0:00:00.606) 0:01:00.168 ********* 2025-05-14 14:46:20.409954 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.409965 | orchestrator | 2025-05-14 14:46:20.409976 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap 
container] ****************** 2025-05-14 14:46:20.409987 | orchestrator | Wednesday 14 May 2025 14:44:59 +0000 (0:00:13.564) 0:01:13.732 ********* 2025-05-14 14:46:20.409997 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.410008 | orchestrator | 2025-05-14 14:46:20.410061 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-14 14:46:20.410072 | orchestrator | Wednesday 14 May 2025 14:45:09 +0000 (0:00:09.921) 0:01:23.653 ********* 2025-05-14 14:46:20.410083 | orchestrator | 2025-05-14 14:46:20.410094 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-14 14:46:20.410105 | orchestrator | Wednesday 14 May 2025 14:45:09 +0000 (0:00:00.053) 0:01:23.707 ********* 2025-05-14 14:46:20.410116 | orchestrator | 2025-05-14 14:46:20.410126 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-14 14:46:20.410137 | orchestrator | Wednesday 14 May 2025 14:45:09 +0000 (0:00:00.053) 0:01:23.760 ********* 2025-05-14 14:46:20.410157 | orchestrator | 2025-05-14 14:46:20.410167 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-14 14:46:20.410178 | orchestrator | Wednesday 14 May 2025 14:45:09 +0000 (0:00:00.055) 0:01:23.815 ********* 2025-05-14 14:46:20.410189 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.410200 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:46:20.410211 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:46:20.410221 | orchestrator | 2025-05-14 14:46:20.410232 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-14 14:46:20.410243 | orchestrator | Wednesday 14 May 2025 14:45:19 +0000 (0:00:09.308) 0:01:33.124 ********* 2025-05-14 14:46:20.410253 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:46:20.410264 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:46:20.410275 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.410285 | orchestrator | 2025-05-14 14:46:20.410296 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-14 14:46:20.410307 | orchestrator | Wednesday 14 May 2025 14:45:27 +0000 (0:00:08.098) 0:01:41.223 ********* 2025-05-14 14:46:20.410318 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.410328 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:46:20.410339 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:46:20.410350 | orchestrator | 2025-05-14 14:46:20.410361 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 14:46:20.410371 | orchestrator | Wednesday 14 May 2025 14:45:32 +0000 (0:00:05.472) 0:01:46.695 ********* 2025-05-14 14:46:20.410382 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:46:20.410393 | orchestrator | 2025-05-14 14:46:20.410404 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-14 14:46:20.410415 | orchestrator | Wednesday 14 May 2025 14:45:33 +0000 (0:00:00.726) 0:01:47.422 ********* 2025-05-14 14:46:20.410426 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:46:20.410437 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.410447 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:46:20.410458 | orchestrator | 2025-05-14 
14:46:20.410468 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-14 14:46:20.410479 | orchestrator | Wednesday 14 May 2025 14:45:34 +0000 (0:00:01.016) 0:01:48.439 ********* 2025-05-14 14:46:20.410490 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:46:20.410501 | orchestrator | 2025-05-14 14:46:20.410512 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-14 14:46:20.410523 | orchestrator | Wednesday 14 May 2025 14:45:36 +0000 (0:00:01.510) 0:01:49.949 ********* 2025-05-14 14:46:20.410534 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-14 14:46:20.410545 | orchestrator | 2025-05-14 14:46:20.410555 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-14 14:46:20.410566 | orchestrator | Wednesday 14 May 2025 14:45:46 +0000 (0:00:10.674) 0:02:00.624 ********* 2025-05-14 14:46:20.410577 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-14 14:46:20.410587 | orchestrator | 2025-05-14 14:46:20.410598 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-14 14:46:20.410614 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:20.163) 0:02:20.787 ********* 2025-05-14 14:46:20.410625 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-14 14:46:20.410636 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-14 14:46:20.410647 | orchestrator | 2025-05-14 14:46:20.410658 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-14 14:46:20.410669 | orchestrator | Wednesday 14 May 2025 14:46:14 +0000 (0:00:07.185) 0:02:27.972 ********* 2025-05-14 14:46:20.410680 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.410691 | orchestrator | 2025-05-14 14:46:20.410702 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-14 14:46:20.410717 | orchestrator | Wednesday 14 May 2025 14:46:14 +0000 (0:00:00.120) 0:02:28.093 ********* 2025-05-14 14:46:20.410728 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.410739 | orchestrator | 2025-05-14 14:46:20.410750 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-14 14:46:20.410768 | orchestrator | Wednesday 14 May 2025 14:46:14 +0000 (0:00:00.128) 0:02:28.222 ********* 2025-05-14 14:46:20.410780 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.410790 | orchestrator | 2025-05-14 14:46:20.410801 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-14 14:46:20.410812 | orchestrator | Wednesday 14 May 2025 14:46:14 +0000 (0:00:00.118) 0:02:28.340 ********* 2025-05-14 14:46:20.410822 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.410833 | orchestrator | 2025-05-14 14:46:20.410843 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-14 14:46:20.410854 | orchestrator | Wednesday 14 May 2025 14:46:14 +0000 (0:00:00.432) 0:02:28.773 ********* 2025-05-14 14:46:20.410865 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:20.410875 | orchestrator | 2025-05-14 14:46:20.410886 | orchestrator | TASK [keystone : include_tasks] 
************************************************ 2025-05-14 14:46:20.410897 | orchestrator | Wednesday 14 May 2025 14:46:18 +0000 (0:00:03.340) 0:02:32.113 ********* 2025-05-14 14:46:20.410908 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:20.410956 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:46:20.410967 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:46:20.410978 | orchestrator | 2025-05-14 14:46:20.410989 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:46:20.411000 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 14:46:20.411012 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 14:46:20.411023 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 14:46:20.411034 | orchestrator | 2025-05-14 14:46:20.411045 | orchestrator | 2025-05-14 14:46:20.411055 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:46:20.411066 | orchestrator | Wednesday 14 May 2025 14:46:18 +0000 (0:00:00.545) 0:02:32.659 ********* 2025-05-14 14:46:20.411076 | orchestrator | =============================================================================== 2025-05-14 14:46:20.411087 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.16s 2025-05-14 14:46:20.411098 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.56s 2025-05-14 14:46:20.411109 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.67s 2025-05-14 14:46:20.411120 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.24s 2025-05-14 14:46:20.411131 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.92s 2025-05-14 14:46:20.411141 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.31s 2025-05-14 14:46:20.411153 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 8.10s 2025-05-14 14:46:20.411164 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.95s 2025-05-14 14:46:20.411175 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.19s 2025-05-14 14:46:20.411186 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.47s 2025-05-14 14:46:20.411197 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.56s 2025-05-14 14:46:20.411208 | orchestrator | keystone : Creating default user role ----------------------------------- 3.34s 2025-05-14 14:46:20.411219 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.16s 2025-05-14 14:46:20.411238 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.10s 2025-05-14 14:46:20.411249 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.94s 2025-05-14 14:46:20.411260 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.72s 2025-05-14 14:46:20.411271 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.58s 2025-05-14 14:46:20.411282 | 
orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.54s 2025-05-14 14:46:20.411292 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.35s 2025-05-14 14:46:20.411303 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.25s 2025-05-14 14:46:20.411314 | orchestrator | 2025-05-14 14:46:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:23.437002 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:23.437452 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:23.438162 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:23.439106 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state STARTED 2025-05-14 14:46:23.440085 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task a618c0db-601e-441f-85c8-ee1a503621f8 is in state SUCCESS 2025-05-14 14:46:23.441639 | orchestrator | 2025-05-14 14:46:23.441665 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 14:46:23.441677 | orchestrator | 2025-05-14 14:46:23.441688 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-14 14:46:23.441700 | orchestrator | 2025-05-14 14:46:23.441711 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 14:46:23.441722 | orchestrator | Wednesday 14 May 2025 14:45:55 +0000 (0:00:00.453) 0:00:00.453 ********* 2025-05-14 14:46:23.441733 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-14 14:46:23.441744 | orchestrator | 2025-05-14 14:46:23.441755 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 14:46:23.441766 | orchestrator | Wednesday 14 May 2025 14:45:56 +0000 (0:00:00.199) 0:00:00.653 ********* 2025-05-14 14:46:23.441777 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:46:23.441788 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 14:46:23.441799 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 14:46:23.441810 | orchestrator | 2025-05-14 14:46:23.441820 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 14:46:23.441831 | orchestrator | Wednesday 14 May 2025 14:45:56 +0000 (0:00:00.810) 0:00:01.464 ********* 2025-05-14 14:46:23.441842 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-14 14:46:23.441853 | orchestrator | 2025-05-14 14:46:23.441864 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 14:46:23.441876 | orchestrator | Wednesday 14 May 2025 14:45:57 +0000 (0:00:00.231) 0:00:01.696 ********* 2025-05-14 14:46:23.441887 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.441897 | orchestrator | 2025-05-14 14:46:23.441908 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 14:46:23.441945 | orchestrator | Wednesday 14 May 2025 14:45:57 +0000 (0:00:00.593) 0:00:02.289 ********* 2025-05-14 14:46:23.441957 | orchestrator | ok: 
[testbed-node-0] 2025-05-14 14:46:23.441968 | orchestrator | 2025-05-14 14:46:23.441979 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 14:46:23.441990 | orchestrator | Wednesday 14 May 2025 14:45:57 +0000 (0:00:00.124) 0:00:02.414 ********* 2025-05-14 14:46:23.442060 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.442075 | orchestrator | 2025-05-14 14:46:23.442086 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 14:46:23.442097 | orchestrator | Wednesday 14 May 2025 14:45:58 +0000 (0:00:00.449) 0:00:02.864 ********* 2025-05-14 14:46:23.442108 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.442118 | orchestrator | 2025-05-14 14:46:23.442129 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 14:46:23.442140 | orchestrator | Wednesday 14 May 2025 14:45:58 +0000 (0:00:00.139) 0:00:03.004 ********* 2025-05-14 14:46:23.442151 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.442161 | orchestrator | 2025-05-14 14:46:23.442172 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 14:46:23.442183 | orchestrator | Wednesday 14 May 2025 14:45:58 +0000 (0:00:00.128) 0:00:03.132 ********* 2025-05-14 14:46:23.442194 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.442204 | orchestrator | 2025-05-14 14:46:23.442215 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 14:46:23.442226 | orchestrator | Wednesday 14 May 2025 14:45:58 +0000 (0:00:00.148) 0:00:03.281 ********* 2025-05-14 14:46:23.442236 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.442248 | orchestrator | 2025-05-14 14:46:23.442265 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 14:46:23.442276 | orchestrator | Wednesday 14 May 2025 14:45:58 +0000 (0:00:00.143) 0:00:03.425 ********* 2025-05-14 14:46:23.442287 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.442297 | orchestrator | 2025-05-14 14:46:23.442308 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 14:46:23.442319 | orchestrator | Wednesday 14 May 2025 14:45:58 +0000 (0:00:00.109) 0:00:03.535 ********* 2025-05-14 14:46:23.442330 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:46:23.442341 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:46:23.442352 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:46:23.442363 | orchestrator | 2025-05-14 14:46:23.442373 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 14:46:23.442384 | orchestrator | Wednesday 14 May 2025 14:45:59 +0000 (0:00:00.869) 0:00:04.404 ********* 2025-05-14 14:46:23.442394 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.442405 | orchestrator | 2025-05-14 14:46:23.442416 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 14:46:23.442427 | orchestrator | Wednesday 14 May 2025 14:46:00 +0000 (0:00:00.237) 0:00:04.642 ********* 2025-05-14 14:46:23.442438 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:46:23.442449 | orchestrator | changed: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:46:23.442482 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:46:23.442501 | orchestrator | 2025-05-14 14:46:23.442520 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 14:46:23.442540 | orchestrator | Wednesday 14 May 2025 14:46:01 +0000 (0:00:01.917) 0:00:06.560 ********* 2025-05-14 14:46:23.442561 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:46:23.442582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:46:23.442601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:46:23.442613 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.442624 | orchestrator | 2025-05-14 14:46:23.442635 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 14:46:23.442660 | orchestrator | Wednesday 14 May 2025 14:46:02 +0000 (0:00:00.464) 0:00:07.025 ********* 2025-05-14 14:46:23.442674 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 14:46:23.442698 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 14:46:23.442710 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 14:46:23.442721 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.442732 | orchestrator | 2025-05-14 14:46:23.442743 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 14:46:23.442753 | orchestrator | Wednesday 14 May 2025 14:46:03 +0000 (0:00:00.779) 0:00:07.804 ********* 2025-05-14 14:46:23.442766 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:46:23.442780 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:46:23.442791 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 14:46:23.442803 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.442813 | orchestrator | 2025-05-14 14:46:23.442824 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 14:46:23.442835 | orchestrator | Wednesday 14 May 2025 14:46:03 +0000 (0:00:00.160) 0:00:07.965 ********* 2025-05-14 14:46:23.442847 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '5e2cf110b535', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 14:46:00.682321', 'end': '2025-05-14 14:46:00.722012', 'delta': '0:00:00.039691', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5e2cf110b535'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-14 14:46:23.442867 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a0f763ca12a4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 14:46:01.277222', 'end': '2025-05-14 14:46:01.311445', 'delta': '0:00:00.034223', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a0f763ca12a4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-14 14:46:23.442894 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'd9558916fb2d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 14:46:01.812374', 'end': '2025-05-14 14:46:01.849721', 'delta': '0:00:00.037347', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d9558916fb2d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-14 14:46:23.442906 | orchestrator | 2025-05-14 14:46:23.443055 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 14:46:23.443174 | orchestrator | Wednesday 14 May 2025 14:46:03 +0000 (0:00:00.208) 0:00:08.173 ********* 2025-05-14 14:46:23.443190 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.443203 | orchestrator | 2025-05-14 14:46:23.443214 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 14:46:23.443226 | orchestrator | Wednesday 14 May 2025 14:46:03 +0000 (0:00:00.249) 0:00:08.422 ********* 2025-05-14 14:46:23.443237 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-14 14:46:23.443248 | 
orchestrator | 2025-05-14 14:46:23.443259 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 14:46:23.443270 | orchestrator | Wednesday 14 May 2025 14:46:05 +0000 (0:00:01.617) 0:00:10.039 ********* 2025-05-14 14:46:23.443282 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443293 | orchestrator | 2025-05-14 14:46:23.443304 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 14:46:23.443315 | orchestrator | Wednesday 14 May 2025 14:46:05 +0000 (0:00:00.119) 0:00:10.159 ********* 2025-05-14 14:46:23.443326 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443337 | orchestrator | 2025-05-14 14:46:23.443347 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 14:46:23.443358 | orchestrator | Wednesday 14 May 2025 14:46:05 +0000 (0:00:00.220) 0:00:10.380 ********* 2025-05-14 14:46:23.443369 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443379 | orchestrator | 2025-05-14 14:46:23.443390 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 14:46:23.443400 | orchestrator | Wednesday 14 May 2025 14:46:05 +0000 (0:00:00.134) 0:00:10.514 ********* 2025-05-14 14:46:23.443411 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.443422 | orchestrator | 2025-05-14 14:46:23.443432 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 14:46:23.443443 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.145) 0:00:10.660 ********* 2025-05-14 14:46:23.443454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443464 | orchestrator | 2025-05-14 14:46:23.443475 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 14:46:23.443486 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.213) 0:00:10.874 ********* 2025-05-14 14:46:23.443497 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443508 | orchestrator | 2025-05-14 14:46:23.443518 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 14:46:23.443529 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.121) 0:00:10.995 ********* 2025-05-14 14:46:23.443540 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443551 | orchestrator | 2025-05-14 14:46:23.443561 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 14:46:23.443572 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.127) 0:00:11.123 ********* 2025-05-14 14:46:23.443583 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443593 | orchestrator | 2025-05-14 14:46:23.443634 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 14:46:23.443645 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.120) 0:00:11.243 ********* 2025-05-14 14:46:23.443656 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443666 | orchestrator | 2025-05-14 14:46:23.443677 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 14:46:23.443688 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.128) 0:00:11.372 ********* 2025-05-14 14:46:23.443699 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
14:46:23.443709 | orchestrator | 2025-05-14 14:46:23.443720 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 14:46:23.443730 | orchestrator | Wednesday 14 May 2025 14:46:06 +0000 (0:00:00.130) 0:00:11.502 ********* 2025-05-14 14:46:23.443741 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443751 | orchestrator | 2025-05-14 14:46:23.443762 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 14:46:23.443773 | orchestrator | Wednesday 14 May 2025 14:46:07 +0000 (0:00:00.300) 0:00:11.803 ********* 2025-05-14 14:46:23.443784 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.443795 | orchestrator | 2025-05-14 14:46:23.443805 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 14:46:23.443816 | orchestrator | Wednesday 14 May 2025 14:46:07 +0000 (0:00:00.131) 0:00:11.934 ********* 2025-05-14 14:46:23.443844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.443998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 14:46:23.444032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d5e4da1-02cf-44be-b84d-bc2f28a5f03a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:46:23.444049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-05-14-13-49-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 14:46:23.444062 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444074 | orchestrator | 2025-05-14 14:46:23.444085 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 14:46:23.444096 | orchestrator | Wednesday 14 May 2025 14:46:07 +0000 (0:00:00.255) 0:00:12.190 ********* 2025-05-14 14:46:23.444107 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444125 | orchestrator | 2025-05-14 14:46:23.444136 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 14:46:23.444147 | orchestrator | Wednesday 14 May 2025 14:46:07 +0000 (0:00:00.239) 0:00:12.429 ********* 2025-05-14 14:46:23.444158 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444169 | orchestrator | 2025-05-14 14:46:23.444180 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 14:46:23.444190 | orchestrator | Wednesday 14 May 2025 14:46:07 +0000 (0:00:00.123) 0:00:12.553 ********* 2025-05-14 14:46:23.444201 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444212 | orchestrator | 2025-05-14 14:46:23.444223 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 14:46:23.444233 | orchestrator | Wednesday 14 May 2025 14:46:08 +0000 (0:00:00.128) 0:00:12.682 ********* 2025-05-14 14:46:23.444244 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.444255 | orchestrator | 2025-05-14 14:46:23.444266 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 14:46:23.444277 | orchestrator | Wednesday 14 May 2025 14:46:08 +0000 (0:00:00.490) 0:00:13.173 ********* 2025-05-14 14:46:23.444287 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.444298 | orchestrator | 2025-05-14 14:46:23.444309 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 14:46:23.444319 | orchestrator | Wednesday 14 May 2025 14:46:08 +0000 (0:00:00.123) 0:00:13.296 ********* 2025-05-14 14:46:23.444330 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.444341 | orchestrator | 2025-05-14 14:46:23.444352 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 14:46:23.444362 | orchestrator | Wednesday 14 May 2025 14:46:09 +0000 (0:00:00.479) 0:00:13.776 ********* 2025-05-14 14:46:23.444373 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.444384 | orchestrator | 2025-05-14 14:46:23.444395 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 14:46:23.444406 | orchestrator | Wednesday 14 May 2025 14:46:09 +0000 (0:00:00.151) 0:00:13.927 ********* 2025-05-14 14:46:23.444417 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444428 | orchestrator | 2025-05-14 14:46:23.444438 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 14:46:23.444449 | orchestrator | Wednesday 14 May 2025 14:46:09 +0000 (0:00:00.594) 0:00:14.522 ********* 2025-05-14 14:46:23.444460 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 14:46:23.444471 | orchestrator | 2025-05-14 14:46:23.444482 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 14:46:23.444493 | orchestrator | Wednesday 14 May 2025 14:46:10 +0000 (0:00:00.168) 0:00:14.691 ********* 2025-05-14 14:46:23.444503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:46:23.444514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:46:23.444529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:46:23.444540 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444551 | orchestrator | 2025-05-14 14:46:23.444562 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 14:46:23.444572 | orchestrator | Wednesday 14 May 2025 14:46:10 +0000 (0:00:00.432) 0:00:15.123 ********* 2025-05-14 14:46:23.444583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:46:23.444594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:46:23.444605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:46:23.444616 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444626 | orchestrator | 2025-05-14 14:46:23.444643 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 14:46:23.444655 | orchestrator | Wednesday 14 May 2025 14:46:10 +0000 (0:00:00.439) 0:00:15.563 ********* 2025-05-14 14:46:23.444666 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:46:23.444683 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 14:46:23.444694 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 14:46:23.444704 | orchestrator | 2025-05-14 14:46:23.444715 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 14:46:23.444725 | orchestrator | Wednesday 14 May 2025 14:46:12 +0000 (0:00:01.075) 0:00:16.638 ********* 2025-05-14 14:46:23.444736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:46:23.444746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:46:23.444757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:46:23.444768 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444778 | orchestrator | 2025-05-14 14:46:23.444789 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 14:46:23.444800 | orchestrator | Wednesday 14 May 2025 14:46:12 +0000 (0:00:00.215) 0:00:16.853 ********* 2025-05-14 14:46:23.444810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 14:46:23.444821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 14:46:23.444831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 14:46:23.444842 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444853 | orchestrator | 2025-05-14 14:46:23.444863 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 14:46:23.444874 | orchestrator | Wednesday 14 May 2025 14:46:12 +0000 (0:00:00.205) 0:00:17.059 ********* 2025-05-14 14:46:23.444885 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 
'addr': '192.168.16.10'}) 2025-05-14 14:46:23.444896 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 14:46:23.444908 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 14:46:23.444935 | orchestrator | 2025-05-14 14:46:23.444947 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 14:46:23.444958 | orchestrator | Wednesday 14 May 2025 14:46:12 +0000 (0:00:00.191) 0:00:17.251 ********* 2025-05-14 14:46:23.444969 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.444980 | orchestrator | 2025-05-14 14:46:23.444991 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 14:46:23.445001 | orchestrator | Wednesday 14 May 2025 14:46:12 +0000 (0:00:00.139) 0:00:17.391 ********* 2025-05-14 14:46:23.445024 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:46:23.445036 | orchestrator | 2025-05-14 14:46:23.445056 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 14:46:23.445067 | orchestrator | Wednesday 14 May 2025 14:46:12 +0000 (0:00:00.128) 0:00:17.519 ********* 2025-05-14 14:46:23.445078 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:46:23.445089 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:46:23.445100 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:46:23.445111 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 14:46:23.445122 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 14:46:23.445133 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 14:46:23.445144 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 14:46:23.445154 | orchestrator | 2025-05-14 14:46:23.445165 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 14:46:23.445175 | orchestrator | Wednesday 14 May 2025 14:46:14 +0000 (0:00:01.137) 0:00:18.656 ********* 2025-05-14 14:46:23.445186 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 14:46:23.445197 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 14:46:23.445214 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 14:46:23.445225 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 14:46:23.445236 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 14:46:23.445246 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 14:46:23.445257 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 14:46:23.445268 | orchestrator | 2025-05-14 14:46:23.445279 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-14 14:46:23.445289 | orchestrator | Wednesday 14 May 2025 14:46:15 +0000 (0:00:01.519) 0:00:20.176 ********* 2025-05-14 
14:46:23.445300 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:46:23.445311 | orchestrator | 2025-05-14 14:46:23.445322 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-14 14:46:23.445333 | orchestrator | Wednesday 14 May 2025 14:46:16 +0000 (0:00:00.513) 0:00:20.689 ********* 2025-05-14 14:46:23.445344 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:46:23.445355 | orchestrator | 2025-05-14 14:46:23.445366 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-14 14:46:23.445378 | orchestrator | Wednesday 14 May 2025 14:46:16 +0000 (0:00:00.582) 0:00:21.272 ********* 2025-05-14 14:46:23.445395 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-14 14:46:23.445406 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-14 14:46:23.445417 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-14 14:46:23.445428 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-14 14:46:23.445438 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-14 14:46:23.445449 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-14 14:46:23.445460 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-14 14:46:23.445470 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-14 14:46:23.445481 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-14 14:46:23.445492 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-14 14:46:23.445503 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-14 14:46:23.445513 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-14 14:46:23.445524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-14 14:46:23.445535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-14 14:46:23.445546 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-14 14:46:23.445557 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-14 14:46:23.445567 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-14 14:46:23.445578 | orchestrator | 2025-05-14 14:46:23.445589 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:46:23.445600 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-14 14:46:23.445612 | orchestrator | 2025-05-14 14:46:23.445622 | orchestrator | 2025-05-14 14:46:23.445633 | orchestrator | 2025-05-14 14:46:23.445644 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:46:23.445654 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:06.312) 0:00:27.585 ********* 2025-05-14 14:46:23.445678 | orchestrator | 
=============================================================================== 2025-05-14 14:46:23.445688 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.31s 2025-05-14 14:46:23.445700 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.92s 2025-05-14 14:46:23.445710 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.62s 2025-05-14 14:46:23.445753 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.52s 2025-05-14 14:46:23.445765 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.14s 2025-05-14 14:46:23.445776 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.08s 2025-05-14 14:46:23.445786 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.87s 2025-05-14 14:46:23.445797 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.81s 2025-05-14 14:46:23.445808 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.78s 2025-05-14 14:46:23.445819 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.59s 2025-05-14 14:46:23.445829 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.59s 2025-05-14 14:46:23.445840 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.58s 2025-05-14 14:46:23.445851 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.51s 2025-05-14 14:46:23.445861 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.49s 2025-05-14 14:46:23.445872 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.48s 2025-05-14 14:46:23.445883 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.46s 2025-05-14 14:46:23.445894 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.45s 2025-05-14 14:46:23.445904 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.44s 2025-05-14 14:46:23.445915 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.43s 2025-05-14 14:46:23.445942 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.30s 2025-05-14 14:46:23.445957 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:23.446267 | orchestrator | 2025-05-14 14:46:23 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:23.446292 | orchestrator | 2025-05-14 14:46:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:26.478813 | orchestrator | 2025-05-14 14:46:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:26.481835 | orchestrator | 2025-05-14 14:46:26 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:26.483604 | orchestrator | 2025-05-14 14:46:26 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:26.486432 | orchestrator | 2025-05-14 14:46:26 | INFO  | Task bf62e568-a9f2-4883-b7f0-c5608936ab48 is in state SUCCESS 2025-05-14 
14:46:26.490964 | orchestrator | 2025-05-14 14:46:26 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:26.494146 | orchestrator | 2025-05-14 14:46:26 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:26.494840 | orchestrator | 2025-05-14 14:46:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:29.539333 | orchestrator | 2025-05-14 14:46:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:29.540505 | orchestrator | 2025-05-14 14:46:29 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:29.542323 | orchestrator | 2025-05-14 14:46:29 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:29.543786 | orchestrator | 2025-05-14 14:46:29 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:29.545474 | orchestrator | 2025-05-14 14:46:29 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:29.547103 | orchestrator | 2025-05-14 14:46:29 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:29.547296 | orchestrator | 2025-05-14 14:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:32.588304 | orchestrator | 2025-05-14 14:46:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:32.589133 | orchestrator | 2025-05-14 14:46:32 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:32.592093 | orchestrator | 2025-05-14 14:46:32 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:32.593430 | orchestrator | 2025-05-14 14:46:32 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:32.595625 | orchestrator | 2025-05-14 14:46:32 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:32.596978 | orchestrator | 2025-05-14 14:46:32 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:32.597301 | orchestrator | 2025-05-14 14:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:35.653688 | orchestrator | 2025-05-14 14:46:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:35.653853 | orchestrator | 2025-05-14 14:46:35 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:35.654360 | orchestrator | 2025-05-14 14:46:35 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:35.655180 | orchestrator | 2025-05-14 14:46:35 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:35.655978 | orchestrator | 2025-05-14 14:46:35 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:35.656315 | orchestrator | 2025-05-14 14:46:35 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:35.656330 | orchestrator | 2025-05-14 14:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:38.707711 | orchestrator | 2025-05-14 14:46:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:38.712038 | orchestrator | 2025-05-14 14:46:38 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:38.713160 | orchestrator | 2025-05-14 14:46:38 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in 
state STARTED 2025-05-14 14:46:38.714277 | orchestrator | 2025-05-14 14:46:38 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:38.717350 | orchestrator | 2025-05-14 14:46:38 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:38.722141 | orchestrator | 2025-05-14 14:46:38 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:38.722205 | orchestrator | 2025-05-14 14:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:41.765846 | orchestrator | 2025-05-14 14:46:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:41.766436 | orchestrator | 2025-05-14 14:46:41 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:41.766805 | orchestrator | 2025-05-14 14:46:41 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:41.767780 | orchestrator | 2025-05-14 14:46:41 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:41.768620 | orchestrator | 2025-05-14 14:46:41 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:41.769844 | orchestrator | 2025-05-14 14:46:41 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:41.769866 | orchestrator | 2025-05-14 14:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:44.813112 | orchestrator | 2025-05-14 14:46:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:44.814228 | orchestrator | 2025-05-14 14:46:44 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:44.817071 | orchestrator | 2025-05-14 14:46:44 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:44.819488 | orchestrator | 2025-05-14 14:46:44 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:44.820458 | orchestrator | 2025-05-14 14:46:44 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:44.821664 | orchestrator | 2025-05-14 14:46:44 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:44.821677 | orchestrator | 2025-05-14 14:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:47.864394 | orchestrator | 2025-05-14 14:46:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:47.866475 | orchestrator | 2025-05-14 14:46:47 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:47.867121 | orchestrator | 2025-05-14 14:46:47 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:47.868400 | orchestrator | 2025-05-14 14:46:47 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:47.869481 | orchestrator | 2025-05-14 14:46:47 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:47.870601 | orchestrator | 2025-05-14 14:46:47 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:47.870623 | orchestrator | 2025-05-14 14:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:50.913681 | orchestrator | 2025-05-14 14:46:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:50.915164 | orchestrator | 2025-05-14 14:46:50 | INFO  | Task 
d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:50.916011 | orchestrator | 2025-05-14 14:46:50 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:50.917026 | orchestrator | 2025-05-14 14:46:50 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:50.918642 | orchestrator | 2025-05-14 14:46:50 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:50.920555 | orchestrator | 2025-05-14 14:46:50 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:50.920625 | orchestrator | 2025-05-14 14:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:53.976210 | orchestrator | 2025-05-14 14:46:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:53.976699 | orchestrator | 2025-05-14 14:46:53 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:53.980564 | orchestrator | 2025-05-14 14:46:53 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:53.982608 | orchestrator | 2025-05-14 14:46:53 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:53.984164 | orchestrator | 2025-05-14 14:46:53 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:53.985677 | orchestrator | 2025-05-14 14:46:53 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:53.985703 | orchestrator | 2025-05-14 14:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:46:57.038641 | orchestrator | 2025-05-14 14:46:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:46:57.039034 | orchestrator | 2025-05-14 14:46:57 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state STARTED 2025-05-14 14:46:57.040867 | orchestrator | 2025-05-14 14:46:57 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:46:57.040960 | orchestrator | 2025-05-14 14:46:57 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:46:57.041223 | orchestrator | 2025-05-14 14:46:57 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:46:57.041873 | orchestrator | 2025-05-14 14:46:57 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:46:57.042127 | orchestrator | 2025-05-14 14:46:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:00.097278 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:00.099096 | orchestrator | 2025-05-14 14:47:00.099127 | orchestrator | 2025-05-14 14:47:00.099137 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-14 14:47:00.099146 | orchestrator | 2025-05-14 14:47:00.099154 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-14 14:47:00.099162 | orchestrator | Wednesday 14 May 2025 14:45:47 +0000 (0:00:00.139) 0:00:00.139 ********* 2025-05-14 14:47:00.099170 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-14 14:47:00.099178 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 14:47:00.099186 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 
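The "Check ceph keys" loop above (continuing below) confirms on the manager that every expected keyring was fetched into the local share directory before anything is written into the configuration repository. A minimal Python sketch of that kind of pre-flight check (the directory path and file list are assumptions taken from this log, not the actual OSISM playbook):

from pathlib import Path
import sys

# Assumed fetch directory on the manager, taken from the task name in the log above.
FETCH_DIR = Path("/share/11111111-1111-1111-1111-111111111111")

# Keyrings the play checks for (the subset visible in this log).
EXPECTED_KEYRINGS = [
    "ceph.client.admin.keyring",
    "ceph.client.cinder.keyring",
    "ceph.client.cinder-backup.keyring",
    "ceph.client.nova.keyring",
    "ceph.client.glance.keyring",
    "ceph.client.gnocchi.keyring",
    "ceph.client.manila.keyring",
]

# Collect any keyring that was not fetched and abort before copying.
missing = [name for name in EXPECTED_KEYRINGS if not (FETCH_DIR / name).is_file()]
if missing:
    sys.exit("missing keyrings in %s: %s" % (FETCH_DIR, ", ".join(missing)))
print("all expected keyrings are present")

The later tasks in this play then copy each keyring to its destination under /opt/configuration/environments/..., as the "Copy ceph kolla keys to the configuration repository" output further down shows with its src/dest pairs.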
2025-05-14 14:47:00.099194 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 14:47:00.099202 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 14:47:00.099209 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-14 14:47:00.099217 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-14 14:47:00.099224 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-14 14:47:00.099232 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-14 14:47:00.099240 | orchestrator | 2025-05-14 14:47:00.099248 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-14 14:47:00.099256 | orchestrator | Wednesday 14 May 2025 14:45:50 +0000 (0:00:02.966) 0:00:03.106 ********* 2025-05-14 14:47:00.099263 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-14 14:47:00.099271 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 14:47:00.099279 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 14:47:00.099287 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 14:47:00.099294 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 14:47:00.099324 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-14 14:47:00.099331 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-14 14:47:00.099339 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-14 14:47:00.099346 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-14 14:47:00.099354 | orchestrator | 2025-05-14 14:47:00.099361 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-14 14:47:00.099369 | orchestrator | Wednesday 14 May 2025 14:45:50 +0000 (0:00:00.236) 0:00:03.342 ********* 2025-05-14 14:47:00.099377 | orchestrator | ok: [testbed-manager] => { 2025-05-14 14:47:00.099387 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-05-14 14:47:00.099398 | orchestrator | } 2025-05-14 14:47:00.099405 | orchestrator | 2025-05-14 14:47:00.099411 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-14 14:47:00.099418 | orchestrator | Wednesday 14 May 2025 14:45:50 +0000 (0:00:00.166) 0:00:03.508 ********* 2025-05-14 14:47:00.099426 | orchestrator | changed: [testbed-manager] 2025-05-14 14:47:00.099434 | orchestrator | 2025-05-14 14:47:00.099442 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-14 14:47:00.099449 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:33.026) 0:00:36.535 ********* 2025-05-14 14:47:00.099471 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-14 14:47:00.099480 | orchestrator | 2025-05-14 14:47:00.099487 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-14 14:47:00.099494 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:00.343) 0:00:36.878 ********* 2025-05-14 14:47:00.099503 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-14 14:47:00.099511 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-14 14:47:00.099519 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-14 14:47:00.099527 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-14 14:47:00.099534 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-14 14:47:00.099553 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-14 14:47:00.099561 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-14 14:47:00.099569 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-14 14:47:00.099577 | orchestrator | 2025-05-14 14:47:00.099584 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-14 14:47:00.099592 | orchestrator | Wednesday 14 May 2025 14:46:26 +0000 (0:00:02.385) 0:00:39.264 ********* 2025-05-14 14:47:00.099606 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:47:00.099614 | orchestrator | 2025-05-14 14:47:00.099622 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:47:00.099630 | orchestrator | 
testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:47:00.099638 | orchestrator | 2025-05-14 14:47:00.099646 | orchestrator | Wednesday 14 May 2025 14:46:26 +0000 (0:00:00.027) 0:00:39.291 ********* 2025-05-14 14:47:00.099653 | orchestrator | =============================================================================== 2025-05-14 14:47:00.099661 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 33.03s 2025-05-14 14:47:00.099669 | orchestrator | Check ceph keys --------------------------------------------------------- 2.97s 2025-05-14 14:47:00.099677 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.39s 2025-05-14 14:47:00.099684 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.34s 2025-05-14 14:47:00.099692 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s 2025-05-14 14:47:00.099699 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.17s 2025-05-14 14:47:00.099707 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-05-14 14:47:00.099716 | orchestrator | 2025-05-14 14:47:00.099724 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task d34c8c97-2a29-4531-9ba4-c601d5dad57b is in state SUCCESS 2025-05-14 14:47:00.099736 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:00.100743 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:00.101539 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:00.103682 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:00.104179 | orchestrator | 2025-05-14 14:47:00 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:00.104203 | orchestrator | 2025-05-14 14:47:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:03.144186 | orchestrator | 2025-05-14 14:47:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:03.144516 | orchestrator | 2025-05-14 14:47:03 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:03.145269 | orchestrator | 2025-05-14 14:47:03 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:03.148422 | orchestrator | 2025-05-14 14:47:03 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:03.149207 | orchestrator | 2025-05-14 14:47:03 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:03.150207 | orchestrator | 2025-05-14 14:47:03 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:03.150311 | orchestrator | 2025-05-14 14:47:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:06.181690 | orchestrator | 2025-05-14 14:47:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:06.181887 | orchestrator | 2025-05-14 14:47:06 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:06.182408 | orchestrator | 2025-05-14 14:47:06 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state 
STARTED 2025-05-14 14:47:06.186882 | orchestrator | 2025-05-14 14:47:06 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:06.187369 | orchestrator | 2025-05-14 14:47:06 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:06.188087 | orchestrator | 2025-05-14 14:47:06 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:06.188117 | orchestrator | 2025-05-14 14:47:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:09.218223 | orchestrator | 2025-05-14 14:47:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:09.218323 | orchestrator | 2025-05-14 14:47:09 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:09.218702 | orchestrator | 2025-05-14 14:47:09 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:09.219485 | orchestrator | 2025-05-14 14:47:09 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:09.220188 | orchestrator | 2025-05-14 14:47:09 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:09.220530 | orchestrator | 2025-05-14 14:47:09 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:09.220611 | orchestrator | 2025-05-14 14:47:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:12.262920 | orchestrator | 2025-05-14 14:47:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:12.263322 | orchestrator | 2025-05-14 14:47:12 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:12.264066 | orchestrator | 2025-05-14 14:47:12 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:12.266169 | orchestrator | 2025-05-14 14:47:12 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:12.266226 | orchestrator | 2025-05-14 14:47:12 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:12.266240 | orchestrator | 2025-05-14 14:47:12 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:12.266298 | orchestrator | 2025-05-14 14:47:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:15.318724 | orchestrator | 2025-05-14 14:47:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:15.318839 | orchestrator | 2025-05-14 14:47:15 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:15.318859 | orchestrator | 2025-05-14 14:47:15 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:15.326313 | orchestrator | 2025-05-14 14:47:15 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:15.326404 | orchestrator | 2025-05-14 14:47:15 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:15.326416 | orchestrator | 2025-05-14 14:47:15 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:15.326427 | orchestrator | 2025-05-14 14:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:18.351474 | orchestrator | 2025-05-14 14:47:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:18.351567 | orchestrator | 2025-05-14 14:47:18 | INFO  | Task 
c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:18.351653 | orchestrator | 2025-05-14 14:47:18 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:18.356196 | orchestrator | 2025-05-14 14:47:18 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:18.356983 | orchestrator | 2025-05-14 14:47:18 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:18.357567 | orchestrator | 2025-05-14 14:47:18 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:18.357589 | orchestrator | 2025-05-14 14:47:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:21.383655 | orchestrator | 2025-05-14 14:47:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:21.383746 | orchestrator | 2025-05-14 14:47:21 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state STARTED 2025-05-14 14:47:21.384354 | orchestrator | 2025-05-14 14:47:21 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:21.384749 | orchestrator | 2025-05-14 14:47:21 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:21.386464 | orchestrator | 2025-05-14 14:47:21 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:21.386933 | orchestrator | 2025-05-14 14:47:21 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:21.386957 | orchestrator | 2025-05-14 14:47:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:24.424394 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:24.424467 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:24.424476 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task c30e5252-0b93-4828-a21d-993e705dc9b1 is in state SUCCESS 2025-05-14 14:47:24.424484 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:24.424776 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:24.425284 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:24.425891 | orchestrator | 2025-05-14 14:47:24 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:24.425971 | orchestrator | 2025-05-14 14:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:27.469611 | orchestrator | 2025-05-14 14:47:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:27.469751 | orchestrator | 2025-05-14 14:47:27 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:27.469889 | orchestrator | 2025-05-14 14:47:27 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:27.471209 | orchestrator | 2025-05-14 14:47:27 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:27.471301 | orchestrator | 2025-05-14 14:47:27 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:27.472500 | orchestrator | 2025-05-14 14:47:27 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:27.472577 | 
orchestrator | 2025-05-14 14:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:30.499245 | orchestrator | 2025-05-14 14:47:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:30.499460 | orchestrator | 2025-05-14 14:47:30 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:30.504319 | orchestrator | 2025-05-14 14:47:30 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:30.504413 | orchestrator | 2025-05-14 14:47:30 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:30.504437 | orchestrator | 2025-05-14 14:47:30 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:30.504457 | orchestrator | 2025-05-14 14:47:30 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:30.504474 | orchestrator | 2025-05-14 14:47:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:33.540730 | orchestrator | 2025-05-14 14:47:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:33.540818 | orchestrator | 2025-05-14 14:47:33 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:33.541150 | orchestrator | 2025-05-14 14:47:33 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:33.541580 | orchestrator | 2025-05-14 14:47:33 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:33.542009 | orchestrator | 2025-05-14 14:47:33 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:33.543421 | orchestrator | 2025-05-14 14:47:33 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:33.543446 | orchestrator | 2025-05-14 14:47:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:36.565450 | orchestrator | 2025-05-14 14:47:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:36.565543 | orchestrator | 2025-05-14 14:47:36 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:36.565955 | orchestrator | 2025-05-14 14:47:36 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:36.566551 | orchestrator | 2025-05-14 14:47:36 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:36.567106 | orchestrator | 2025-05-14 14:47:36 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:36.567576 | orchestrator | 2025-05-14 14:47:36 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:36.567597 | orchestrator | 2025-05-14 14:47:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:39.592188 | orchestrator | 2025-05-14 14:47:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:39.592313 | orchestrator | 2025-05-14 14:47:39 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:39.596075 | orchestrator | 2025-05-14 14:47:39 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:39.596097 | orchestrator | 2025-05-14 14:47:39 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:39.596105 | orchestrator | 2025-05-14 14:47:39 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 
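(The entries above come from the task watcher on the manager node: it queries the state of every queued deployment task, logs STARTED until a task flips to SUCCESS, and then waits before the next check. A minimal Python sketch of that polling pattern, under the assumption that `get_task_state` stands in for whatever call the real watcher uses against its task backend; the wait message mirrors the log output, and the actual checks land roughly every three seconds because the queries themselves take time.)

```python
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)
log = logging.getLogger(__name__)


def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in: ask the task backend for the current state of a task."""
    raise NotImplementedError


def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll all tasks until none of them is pending or started any more."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log.info("Task %s is in state %s", task_id, state)
            if state not in ("PENDING", "STARTED"):
                # SUCCESS (or FAILURE) means this task no longer needs watching.
                pending.discard(task_id)
        if pending:
            log.info("Wait %d second(s) until the next check", int(interval))
            time.sleep(interval)
```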
2025-05-14 14:47:39.596145 | orchestrator | 2025-05-14 14:47:39 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:39.596153 | orchestrator | 2025-05-14 14:47:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:42.623931 | orchestrator | 2025-05-14 14:47:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:42.624162 | orchestrator | 2025-05-14 14:47:42 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:42.624592 | orchestrator | 2025-05-14 14:47:42 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:42.625152 | orchestrator | 2025-05-14 14:47:42 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:42.625653 | orchestrator | 2025-05-14 14:47:42 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:42.626453 | orchestrator | 2025-05-14 14:47:42 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:42.626543 | orchestrator | 2025-05-14 14:47:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:45.651212 | orchestrator | 2025-05-14 14:47:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:45.653202 | orchestrator | 2025-05-14 14:47:45 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:45.653250 | orchestrator | 2025-05-14 14:47:45 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:45.653262 | orchestrator | 2025-05-14 14:47:45 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:45.653550 | orchestrator | 2025-05-14 14:47:45 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:45.654132 | orchestrator | 2025-05-14 14:47:45 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:45.654154 | orchestrator | 2025-05-14 14:47:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:48.679833 | orchestrator | 2025-05-14 14:47:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:48.680502 | orchestrator | 2025-05-14 14:47:48 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:48.680559 | orchestrator | 2025-05-14 14:47:48 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:48.680717 | orchestrator | 2025-05-14 14:47:48 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:48.681257 | orchestrator | 2025-05-14 14:47:48 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:48.681886 | orchestrator | 2025-05-14 14:47:48 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:48.682124 | orchestrator | 2025-05-14 14:47:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:51.713502 | orchestrator | 2025-05-14 14:47:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:51.713593 | orchestrator | 2025-05-14 14:47:51 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:51.714501 | orchestrator | 2025-05-14 14:47:51 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:51.714853 | orchestrator | 2025-05-14 14:47:51 | INFO  | Task 
94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:51.715550 | orchestrator | 2025-05-14 14:47:51 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:51.717645 | orchestrator | 2025-05-14 14:47:51 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:51.717703 | orchestrator | 2025-05-14 14:47:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:54.751317 | orchestrator | 2025-05-14 14:47:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:54.751540 | orchestrator | 2025-05-14 14:47:54 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:54.751976 | orchestrator | 2025-05-14 14:47:54 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:54.754076 | orchestrator | 2025-05-14 14:47:54 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:54.754500 | orchestrator | 2025-05-14 14:47:54 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:54.755159 | orchestrator | 2025-05-14 14:47:54 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:54.755234 | orchestrator | 2025-05-14 14:47:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:47:57.796585 | orchestrator | 2025-05-14 14:47:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:47:57.796743 | orchestrator | 2025-05-14 14:47:57 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state STARTED 2025-05-14 14:47:57.797417 | orchestrator | 2025-05-14 14:47:57 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:47:57.797919 | orchestrator | 2025-05-14 14:47:57 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:47:57.798464 | orchestrator | 2025-05-14 14:47:57 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:47:57.799079 | orchestrator | 2025-05-14 14:47:57 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:47:57.799099 | orchestrator | 2025-05-14 14:47:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:00.833865 | orchestrator | 2025-05-14 14:48:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:00.835519 | orchestrator | 2025-05-14 14:48:00 | INFO  | Task d239e158-0c72-4bf9-a28e-64a19eed3899 is in state SUCCESS 2025-05-14 14:48:00.836032 | orchestrator | 2025-05-14 14:48:00.836086 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-14 14:48:00.836101 | orchestrator | 2025-05-14 14:48:00.836113 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-14 14:48:00.836125 | orchestrator | Wednesday 14 May 2025 14:46:22 +0000 (0:00:00.339) 0:00:00.339 ********* 2025-05-14 14:48:00.836138 | orchestrator | changed: [localhost] 2025-05-14 14:48:00.836166 | orchestrator | 2025-05-14 14:48:00.836179 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-14 14:48:00.836191 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:00.742) 0:00:01.082 ********* 2025-05-14 14:48:00.836203 | orchestrator | changed: [localhost] 2025-05-14 14:48:00.836215 | orchestrator | 2025-05-14 14:48:00.836227 | orchestrator | TASK [Download 
ironic-agent kernel] ******************************************** 2025-05-14 14:48:00.836240 | orchestrator | Wednesday 14 May 2025 14:46:52 +0000 (0:00:28.924) 0:00:30.006 ********* 2025-05-14 14:48:00.836252 | orchestrator | changed: [localhost] 2025-05-14 14:48:00.836264 | orchestrator | 2025-05-14 14:48:00.836276 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:48:00.836287 | orchestrator | 2025-05-14 14:48:00.836300 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:48:00.836326 | orchestrator | Wednesday 14 May 2025 14:46:56 +0000 (0:00:03.960) 0:00:33.967 ********* 2025-05-14 14:48:00.836339 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:48:00.836351 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:48:00.836363 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:48:00.836375 | orchestrator | 2025-05-14 14:48:00.836385 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:48:00.836397 | orchestrator | Wednesday 14 May 2025 14:46:56 +0000 (0:00:00.370) 0:00:34.337 ********* 2025-05-14 14:48:00.836407 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-14 14:48:00.836419 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-14 14:48:00.836430 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-14 14:48:00.836442 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-14 14:48:00.836475 | orchestrator | 2025-05-14 14:48:00.836487 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-14 14:48:00.836498 | orchestrator | skipping: no hosts matched 2025-05-14 14:48:00.836511 | orchestrator | 2025-05-14 14:48:00.836523 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:48:00.836536 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:00.836549 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:00.836562 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:00.836573 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:00.836585 | orchestrator | 2025-05-14 14:48:00.836596 | orchestrator | 2025-05-14 14:48:00.836607 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:48:00.836619 | orchestrator | Wednesday 14 May 2025 14:46:57 +0000 (0:00:00.421) 0:00:34.759 ********* 2025-05-14 14:48:00.836630 | orchestrator | =============================================================================== 2025-05-14 14:48:00.836640 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 28.92s 2025-05-14 14:48:00.836651 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.96s 2025-05-14 14:48:00.836662 | orchestrator | Ensure the destination directory exists --------------------------------- 0.74s 2025-05-14 14:48:00.836673 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-05-14 14:48:00.836683 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.37s 2025-05-14 14:48:00.836694 | orchestrator | 2025-05-14 14:48:00.836705 | orchestrator | 2025-05-14 14:48:00.836716 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-14 14:48:00.836727 | orchestrator | 2025-05-14 14:48:00.836734 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-14 14:48:00.836741 | orchestrator | Wednesday 14 May 2025 14:46:29 +0000 (0:00:00.145) 0:00:00.145 ********* 2025-05-14 14:48:00.836748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-14 14:48:00.836756 | orchestrator | 2025-05-14 14:48:00.836764 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-14 14:48:00.836775 | orchestrator | Wednesday 14 May 2025 14:46:29 +0000 (0:00:00.201) 0:00:00.347 ********* 2025-05-14 14:48:00.836783 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-14 14:48:00.836789 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-14 14:48:00.836797 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-14 14:48:00.836806 | orchestrator | 2025-05-14 14:48:00.836817 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-14 14:48:00.836828 | orchestrator | Wednesday 14 May 2025 14:46:30 +0000 (0:00:01.065) 0:00:01.413 ********* 2025-05-14 14:48:00.836839 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-14 14:48:00.836850 | orchestrator | 2025-05-14 14:48:00.836865 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-14 14:48:00.836877 | orchestrator | Wednesday 14 May 2025 14:46:31 +0000 (0:00:01.138) 0:00:02.551 ********* 2025-05-14 14:48:00.836899 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:00.836912 | orchestrator | 2025-05-14 14:48:00.836923 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-14 14:48:00.836934 | orchestrator | Wednesday 14 May 2025 14:46:32 +0000 (0:00:00.806) 0:00:03.358 ********* 2025-05-14 14:48:00.836958 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:00.836969 | orchestrator | 2025-05-14 14:48:00.836980 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-14 14:48:00.836993 | orchestrator | Wednesday 14 May 2025 14:46:33 +0000 (0:00:00.911) 0:00:04.269 ********* 2025-05-14 14:48:00.837007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-05-14 14:48:00.837018 | orchestrator | ok: [testbed-manager] 2025-05-14 14:48:00.837028 | orchestrator | 2025-05-14 14:48:00.837039 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-14 14:48:00.837068 | orchestrator | Wednesday 14 May 2025 14:47:13 +0000 (0:00:39.794) 0:00:44.064 ********* 2025-05-14 14:48:00.837080 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-14 14:48:00.837091 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-14 14:48:00.837102 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-14 14:48:00.837113 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-14 14:48:00.837132 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-14 14:48:00.837143 | orchestrator | 2025-05-14 14:48:00.837150 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-14 14:48:00.837156 | orchestrator | Wednesday 14 May 2025 14:47:16 +0000 (0:00:03.668) 0:00:47.732 ********* 2025-05-14 14:48:00.837162 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-14 14:48:00.837168 | orchestrator | 2025-05-14 14:48:00.837175 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-14 14:48:00.837181 | orchestrator | Wednesday 14 May 2025 14:47:17 +0000 (0:00:00.436) 0:00:48.168 ********* 2025-05-14 14:48:00.837187 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:48:00.837193 | orchestrator | 2025-05-14 14:48:00.837199 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-14 14:48:00.837206 | orchestrator | Wednesday 14 May 2025 14:47:17 +0000 (0:00:00.116) 0:00:48.285 ********* 2025-05-14 14:48:00.837212 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:48:00.837218 | orchestrator | 2025-05-14 14:48:00.837224 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-14 14:48:00.837230 | orchestrator | Wednesday 14 May 2025 14:47:17 +0000 (0:00:00.286) 0:00:48.572 ********* 2025-05-14 14:48:00.837237 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:00.837243 | orchestrator | 2025-05-14 14:48:00.837249 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-14 14:48:00.837255 | orchestrator | Wednesday 14 May 2025 14:47:18 +0000 (0:00:01.216) 0:00:49.788 ********* 2025-05-14 14:48:00.837261 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:00.837268 | orchestrator | 2025-05-14 14:48:00.837274 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-14 14:48:00.837280 | orchestrator | Wednesday 14 May 2025 14:47:19 +0000 (0:00:00.912) 0:00:50.701 ********* 2025-05-14 14:48:00.837286 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:00.837292 | orchestrator | 2025-05-14 14:48:00.837298 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-14 14:48:00.837304 | orchestrator | Wednesday 14 May 2025 14:47:20 +0000 (0:00:00.605) 0:00:51.306 ********* 2025-05-14 14:48:00.837311 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-14 14:48:00.837317 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-14 14:48:00.837323 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-14 14:48:00.837329 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-05-14 14:48:00.837335 | orchestrator | 2025-05-14 14:48:00.837345 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:48:00.837354 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 14:48:00.837361 | orchestrator | 2025-05-14 14:48:00.837367 | orchestrator | Wednesday 14 May 2025 14:47:21 +0000 (0:00:01.166) 0:00:52.472 ********* 2025-05-14 14:48:00.837379 | orchestrator | =============================================================================== 2025-05-14 14:48:00.837385 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.79s 2025-05-14 14:48:00.837391 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.67s 2025-05-14 14:48:00.837397 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.22s 2025-05-14 14:48:00.837404 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.17s 2025-05-14 14:48:00.837410 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.14s 2025-05-14 14:48:00.837416 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.07s 2025-05-14 14:48:00.837424 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.91s 2025-05-14 14:48:00.837434 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2025-05-14 14:48:00.837440 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s 2025-05-14 14:48:00.837446 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2025-05-14 14:48:00.837452 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-05-14 14:48:00.837459 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-05-14 14:48:00.837465 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2025-05-14 14:48:00.837471 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-05-14 14:48:00.837477 | orchestrator | 2025-05-14 14:48:00.839854 | orchestrator | 2025-05-14 14:48:00 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:00.842769 | orchestrator | 2025-05-14 14:48:00 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:00.847091 | orchestrator | 2025-05-14 14:48:00 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:00.853368 | orchestrator | 2025-05-14 14:48:00 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:00.853412 | orchestrator | 2025-05-14 14:48:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:03.891499 | orchestrator | 2025-05-14 14:48:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:03.893253 | orchestrator | 2025-05-14 14:48:03 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:03.893613 | orchestrator | 2025-05-14 14:48:03 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:03.894516 | orchestrator | 2025-05-14 14:48:03 | INFO  | Task 
7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:03.895745 | orchestrator | 2025-05-14 14:48:03 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:03.895771 | orchestrator | 2025-05-14 14:48:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:06.940647 | orchestrator | 2025-05-14 14:48:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:06.941332 | orchestrator | 2025-05-14 14:48:06 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:06.941871 | orchestrator | 2025-05-14 14:48:06 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:06.942598 | orchestrator | 2025-05-14 14:48:06 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:06.944179 | orchestrator | 2025-05-14 14:48:06 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:06.944204 | orchestrator | 2025-05-14 14:48:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:09.985038 | orchestrator | 2025-05-14 14:48:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:09.986766 | orchestrator | 2025-05-14 14:48:09 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:09.988326 | orchestrator | 2025-05-14 14:48:09 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:09.990204 | orchestrator | 2025-05-14 14:48:09 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:09.991606 | orchestrator | 2025-05-14 14:48:09 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:09.991630 | orchestrator | 2025-05-14 14:48:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:13.029524 | orchestrator | 2025-05-14 14:48:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:13.029646 | orchestrator | 2025-05-14 14:48:13 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:13.030108 | orchestrator | 2025-05-14 14:48:13 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:13.032324 | orchestrator | 2025-05-14 14:48:13 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:13.032363 | orchestrator | 2025-05-14 14:48:13 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:13.032382 | orchestrator | 2025-05-14 14:48:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:16.055452 | orchestrator | 2025-05-14 14:48:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:16.055525 | orchestrator | 2025-05-14 14:48:16 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:16.055535 | orchestrator | 2025-05-14 14:48:16 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:16.055543 | orchestrator | 2025-05-14 14:48:16 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:16.055768 | orchestrator | 2025-05-14 14:48:16 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:16.055785 | orchestrator | 2025-05-14 14:48:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:19.081015 | orchestrator | 2025-05-14 14:48:19 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:19.083280 | orchestrator | 2025-05-14 14:48:19 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:19.084151 | orchestrator | 2025-05-14 14:48:19 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:19.085130 | orchestrator | 2025-05-14 14:48:19 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:19.087034 | orchestrator | 2025-05-14 14:48:19 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:19.087056 | orchestrator | 2025-05-14 14:48:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:22.119240 | orchestrator | 2025-05-14 14:48:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:22.119409 | orchestrator | 2025-05-14 14:48:22 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:22.119926 | orchestrator | 2025-05-14 14:48:22 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:22.123662 | orchestrator | 2025-05-14 14:48:22 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:22.124243 | orchestrator | 2025-05-14 14:48:22 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state STARTED 2025-05-14 14:48:22.124267 | orchestrator | 2025-05-14 14:48:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:25.160972 | orchestrator | 2025-05-14 14:48:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:25.161053 | orchestrator | 2025-05-14 14:48:25 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:25.161548 | orchestrator | 2025-05-14 14:48:25 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:25.165091 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 14:48:25.165116 | orchestrator | 2025-05-14 14:48:25.165122 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-14 14:48:25.165127 | orchestrator | 2025-05-14 14:48:25.165131 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-14 14:48:25.165136 | orchestrator | Wednesday 14 May 2025 14:47:24 +0000 (0:00:00.390) 0:00:00.390 ********* 2025-05-14 14:48:25.165140 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165145 | orchestrator | 2025-05-14 14:48:25.165149 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-14 14:48:25.165153 | orchestrator | Wednesday 14 May 2025 14:47:26 +0000 (0:00:01.389) 0:00:01.779 ********* 2025-05-14 14:48:25.165157 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165160 | orchestrator | 2025-05-14 14:48:25.165164 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-14 14:48:25.165168 | orchestrator | Wednesday 14 May 2025 14:47:27 +0000 (0:00:01.015) 0:00:02.794 ********* 2025-05-14 14:48:25.165172 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165176 | orchestrator | 2025-05-14 14:48:25.165179 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-14 14:48:25.165183 | orchestrator | Wednesday 14 May 2025 14:47:28 +0000 (0:00:00.940) 0:00:03.735 
********* 2025-05-14 14:48:25.165187 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165190 | orchestrator | 2025-05-14 14:48:25.165194 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-14 14:48:25.165198 | orchestrator | Wednesday 14 May 2025 14:47:29 +0000 (0:00:00.820) 0:00:04.555 ********* 2025-05-14 14:48:25.165202 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165206 | orchestrator | 2025-05-14 14:48:25.165210 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-14 14:48:25.165213 | orchestrator | Wednesday 14 May 2025 14:47:29 +0000 (0:00:00.798) 0:00:05.354 ********* 2025-05-14 14:48:25.165217 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165221 | orchestrator | 2025-05-14 14:48:25.165225 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-14 14:48:25.165228 | orchestrator | Wednesday 14 May 2025 14:47:30 +0000 (0:00:00.786) 0:00:06.140 ********* 2025-05-14 14:48:25.165232 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165236 | orchestrator | 2025-05-14 14:48:25.165240 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-14 14:48:25.165244 | orchestrator | Wednesday 14 May 2025 14:47:31 +0000 (0:00:01.296) 0:00:07.436 ********* 2025-05-14 14:48:25.165248 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165251 | orchestrator | 2025-05-14 14:48:25.165255 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-14 14:48:25.165259 | orchestrator | Wednesday 14 May 2025 14:47:33 +0000 (0:00:01.030) 0:00:08.467 ********* 2025-05-14 14:48:25.165262 | orchestrator | changed: [testbed-manager] 2025-05-14 14:48:25.165266 | orchestrator | 2025-05-14 14:48:25.165270 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-14 14:48:25.165274 | orchestrator | Wednesday 14 May 2025 14:47:53 +0000 (0:00:20.668) 0:00:29.135 ********* 2025-05-14 14:48:25.165293 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:48:25.165297 | orchestrator | 2025-05-14 14:48:25.165300 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 14:48:25.165304 | orchestrator | 2025-05-14 14:48:25.165308 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 14:48:25.165312 | orchestrator | Wednesday 14 May 2025 14:47:54 +0000 (0:00:00.534) 0:00:29.670 ********* 2025-05-14 14:48:25.165315 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:25.165319 | orchestrator | 2025-05-14 14:48:25.165323 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 14:48:25.165327 | orchestrator | 2025-05-14 14:48:25.165330 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 14:48:25.165334 | orchestrator | Wednesday 14 May 2025 14:47:56 +0000 (0:00:01.846) 0:00:31.516 ********* 2025-05-14 14:48:25.165338 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:25.165342 | orchestrator | 2025-05-14 14:48:25.165346 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 14:48:25.165350 | orchestrator | 2025-05-14 14:48:25.165353 | orchestrator | TASK [Restart ceph 
manager service] ******************************************** 2025-05-14 14:48:25.165357 | orchestrator | Wednesday 14 May 2025 14:47:57 +0000 (0:00:01.630) 0:00:33.147 ********* 2025-05-14 14:48:25.165361 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:48:25.165365 | orchestrator | 2025-05-14 14:48:25.165368 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:48:25.165373 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 14:48:25.165387 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:25.165391 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:25.165395 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:48:25.165399 | orchestrator | 2025-05-14 14:48:25.165403 | orchestrator | 2025-05-14 14:48:25.165406 | orchestrator | 2025-05-14 14:48:25.165410 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:48:25.165414 | orchestrator | Wednesday 14 May 2025 14:47:59 +0000 (0:00:01.358) 0:00:34.505 ********* 2025-05-14 14:48:25.165418 | orchestrator | =============================================================================== 2025-05-14 14:48:25.165433 | orchestrator | Create admin user ------------------------------------------------------ 20.67s 2025-05-14 14:48:25.165444 | orchestrator | Restart ceph manager service -------------------------------------------- 4.84s 2025-05-14 14:48:25.165448 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.39s 2025-05-14 14:48:25.165452 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.30s 2025-05-14 14:48:25.165455 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.03s 2025-05-14 14:48:25.165459 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2025-05-14 14:48:25.165463 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.94s 2025-05-14 14:48:25.165467 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.82s 2025-05-14 14:48:25.165470 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.80s 2025-05-14 14:48:25.165474 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.79s 2025-05-14 14:48:25.165478 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.53s 2025-05-14 14:48:25.165482 | orchestrator | 2025-05-14 14:48:25.165485 | orchestrator | 2025-05-14 14:48:25.165489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:48:25.165497 | orchestrator | 2025-05-14 14:48:25.165501 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:48:25.165505 | orchestrator | Wednesday 14 May 2025 14:47:01 +0000 (0:00:00.542) 0:00:00.542 ********* 2025-05-14 14:48:25.165508 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:48:25.165513 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:48:25.165517 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:48:25.165520 | orchestrator | 2025-05-14 
14:48:25.165524 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:48:25.165528 | orchestrator | Wednesday 14 May 2025 14:47:02 +0000 (0:00:00.489) 0:00:01.032 ********* 2025-05-14 14:48:25.165532 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-14 14:48:25.165536 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-14 14:48:25.165539 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-14 14:48:25.165543 | orchestrator | 2025-05-14 14:48:25.165547 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-14 14:48:25.165551 | orchestrator | 2025-05-14 14:48:25.165554 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 14:48:25.165558 | orchestrator | Wednesday 14 May 2025 14:47:02 +0000 (0:00:00.340) 0:00:01.372 ********* 2025-05-14 14:48:25.165562 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:48:25.165566 | orchestrator | 2025-05-14 14:48:25.165570 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-14 14:48:25.165574 | orchestrator | Wednesday 14 May 2025 14:47:03 +0000 (0:00:00.889) 0:00:02.262 ********* 2025-05-14 14:48:25.165578 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-14 14:48:25.165581 | orchestrator | 2025-05-14 14:48:25.165585 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-14 14:48:25.165589 | orchestrator | Wednesday 14 May 2025 14:47:07 +0000 (0:00:03.782) 0:00:06.044 ********* 2025-05-14 14:48:25.165593 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-14 14:48:25.165596 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-14 14:48:25.165600 | orchestrator | 2025-05-14 14:48:25.165604 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-14 14:48:25.165608 | orchestrator | Wednesday 14 May 2025 14:47:14 +0000 (0:00:07.018) 0:00:13.063 ********* 2025-05-14 14:48:25.165612 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:48:25.165616 | orchestrator | 2025-05-14 14:48:25.165619 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-14 14:48:25.165623 | orchestrator | Wednesday 14 May 2025 14:47:18 +0000 (0:00:04.096) 0:00:17.159 ********* 2025-05-14 14:48:25.165627 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:48:25.165631 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-14 14:48:25.165635 | orchestrator | 2025-05-14 14:48:25.165638 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-14 14:48:25.165642 | orchestrator | Wednesday 14 May 2025 14:47:22 +0000 (0:00:04.408) 0:00:21.568 ********* 2025-05-14 14:48:25.165646 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:48:25.165650 | orchestrator | 2025-05-14 14:48:25.165654 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-14 14:48:25.165657 | orchestrator | Wednesday 14 May 2025 14:47:26 +0000 
(0:00:03.444) 0:00:25.012 ********* 2025-05-14 14:48:25.165664 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-14 14:48:25.165668 | orchestrator | 2025-05-14 14:48:25.165671 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 14:48:25.165675 | orchestrator | Wednesday 14 May 2025 14:47:30 +0000 (0:00:04.668) 0:00:29.681 ********* 2025-05-14 14:48:25.165682 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:25.165686 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:25.165690 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:25.165694 | orchestrator | 2025-05-14 14:48:25.165698 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-14 14:48:25.165703 | orchestrator | Wednesday 14 May 2025 14:47:31 +0000 (0:00:00.799) 0:00:30.480 ********* 2025-05-14 14:48:25.165713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165730 | orchestrator | 2025-05-14 14:48:25.165734 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-14 14:48:25.165738 | orchestrator | Wednesday 14 May 2025 14:47:33 +0000 (0:00:01.597) 0:00:32.077 ********* 2025-05-14 14:48:25.165742 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:25.165746 | orchestrator | 2025-05-14 14:48:25.165751 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-14 14:48:25.165755 | orchestrator | Wednesday 14 May 2025 14:47:33 +0000 (0:00:00.287) 0:00:32.365 ********* 2025-05-14 14:48:25.165759 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:25.165763 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:25.165767 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:25.165774 | orchestrator | 2025-05-14 14:48:25.165778 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 14:48:25.165782 | orchestrator | Wednesday 14 May 2025 14:47:34 +0000 (0:00:00.723) 0:00:33.089 ********* 2025-05-14 14:48:25.165787 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:48:25.165791 | orchestrator | 2025-05-14 14:48:25.165795 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-14 14:48:25.165802 | orchestrator | Wednesday 14 May 2025 14:47:35 +0000 (0:00:01.606) 0:00:34.696 ********* 2025-05-14 14:48:25.165811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165825 | orchestrator | 2025-05-14 14:48:25.165829 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-14 14:48:25.165833 | orchestrator | Wednesday 14 May 2025 14:47:38 +0000 (0:00:02.693) 0:00:37.389 ********* 2025-05-14 14:48:25.165838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.165845 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:25.165852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.165859 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:25.165864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.165868 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:25.165872 | orchestrator | 2025-05-14 14:48:25.165876 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-14 14:48:25.165880 | orchestrator | Wednesday 14 May 2025 14:47:39 +0000 (0:00:00.787) 0:00:38.177 ********* 2025-05-14 14:48:25.165885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.165889 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:25.165894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.165901 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:25.165908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.165913 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:25.165917 | orchestrator | 2025-05-14 14:48:25.165923 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-14 14:48:25.165928 | orchestrator | Wednesday 14 May 2025 14:47:40 +0000 (0:00:00.837) 0:00:39.014 ********* 2025-05-14 14:48:25.165932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165949 | orchestrator | 2025-05-14 14:48:25.165953 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-14 14:48:25.165957 | orchestrator | Wednesday 14 May 2025 14:47:42 +0000 (0:00:02.283) 0:00:41.299 ********* 2025-05-14 14:48:25.165964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.165988 | orchestrator | 2025-05-14 14:48:25.165992 | orchestrator | TASK 
[placement : Copying over placement-api wsgi configuration] *************** 2025-05-14 14:48:25.165999 | orchestrator | Wednesday 14 May 2025 14:47:47 +0000 (0:00:04.880) 0:00:46.179 ********* 2025-05-14 14:48:25.166004 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 14:48:25.166008 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 14:48:25.166012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 14:48:25.166046 | orchestrator | 2025-05-14 14:48:25.166050 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-14 14:48:25.166055 | orchestrator | Wednesday 14 May 2025 14:47:48 +0000 (0:00:01.676) 0:00:47.855 ********* 2025-05-14 14:48:25.166059 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:25.166063 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:25.166068 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:48:25.166072 | orchestrator | 2025-05-14 14:48:25.166117 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-14 14:48:25.166120 | orchestrator | Wednesday 14 May 2025 14:47:52 +0000 (0:00:03.061) 0:00:50.917 ********* 2025-05-14 14:48:25.166131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.166135 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:25.166145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.166149 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:25.166153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 14:48:25.166161 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:25.166164 | orchestrator | 2025-05-14 14:48:25.166168 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-14 14:48:25.166172 | orchestrator | Wednesday 14 May 2025 14:47:52 +0000 (0:00:00.659) 0:00:51.576 ********* 2025-05-14 14:48:25.166176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.166183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.166190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:25.166194 | orchestrator | 2025-05-14 14:48:25.166198 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-14 14:48:25.166202 | orchestrator | Wednesday 14 May 2025 14:47:54 +0000 (0:00:01.731) 0:00:53.307 ********* 2025-05-14 14:48:25.166206 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:25.166210 | orchestrator | 2025-05-14 14:48:25.166213 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-14 14:48:25.166217 | orchestrator | Wednesday 14 May 2025 14:47:57 +0000 (0:00:02.708) 0:00:56.015 ********* 2025-05-14 14:48:25.166221 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:25.166224 | orchestrator | 2025-05-14 14:48:25.166228 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-14 14:48:25.166232 | orchestrator | Wednesday 14 May 2025 14:47:59 +0000 (0:00:02.590) 0:00:58.605 ********* 2025-05-14 14:48:25.166241 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:25.166245 | orchestrator | 2025-05-14 14:48:25.166249 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 14:48:25.166253 | orchestrator | Wednesday 14 May 2025 14:48:12 +0000 (0:00:12.469) 0:01:11.075 ********* 2025-05-14 14:48:25.166256 | orchestrator | 2025-05-14 14:48:25.166260 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 14:48:25.166264 | orchestrator | Wednesday 14 May 2025 14:48:12 +0000 (0:00:00.046) 0:01:11.121 ********* 2025-05-14 14:48:25.166268 | orchestrator | 2025-05-14 14:48:25.166271 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 14:48:25.166275 | orchestrator | Wednesday 14 May 2025 14:48:12 +0000 (0:00:00.125) 0:01:11.247 ********* 2025-05-14 14:48:25.166279 | orchestrator | 2025-05-14 14:48:25.166282 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-14 14:48:25.166286 | orchestrator | Wednesday 14 May 2025 14:48:12 +0000 (0:00:00.044) 0:01:11.292 ********* 2025-05-14 14:48:25.166290 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:25.166294 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:25.166297 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:48:25.166301 | orchestrator | 2025-05-14 14:48:25.166305 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:48:25.166309 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:48:25.166313 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:48:25.166317 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 14:48:25.166320 | orchestrator | 2025-05-14 
14:48:25.166324 | orchestrator | 2025-05-14 14:48:25.166328 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:48:25.166332 | orchestrator | Wednesday 14 May 2025 14:48:23 +0000 (0:00:10.957) 0:01:22.250 ********* 2025-05-14 14:48:25.166335 | orchestrator | =============================================================================== 2025-05-14 14:48:25.166339 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.47s 2025-05-14 14:48:25.166343 | orchestrator | placement : Restart placement-api container ---------------------------- 10.96s 2025-05-14 14:48:25.166346 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.02s 2025-05-14 14:48:25.166350 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.88s 2025-05-14 14:48:25.166354 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.67s 2025-05-14 14:48:25.166357 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.41s 2025-05-14 14:48:25.166361 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.10s 2025-05-14 14:48:25.166365 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.78s 2025-05-14 14:48:25.166368 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.44s 2025-05-14 14:48:25.166372 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 3.06s 2025-05-14 14:48:25.166376 | orchestrator | placement : Creating placement databases -------------------------------- 2.71s 2025-05-14 14:48:25.166380 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.69s 2025-05-14 14:48:25.166386 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.59s 2025-05-14 14:48:25.166390 | orchestrator | placement : Copying over config.json files for services ----------------- 2.28s 2025-05-14 14:48:25.166393 | orchestrator | placement : Check placement containers ---------------------------------- 1.73s 2025-05-14 14:48:25.166397 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.68s 2025-05-14 14:48:25.166404 | orchestrator | placement : include_tasks ----------------------------------------------- 1.61s 2025-05-14 14:48:25.166408 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.60s 2025-05-14 14:48:25.166412 | orchestrator | placement : include_tasks ----------------------------------------------- 0.89s 2025-05-14 14:48:25.166415 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.84s 2025-05-14 14:48:25.166419 | orchestrator | 2025-05-14 14:48:25 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:25.166425 | orchestrator | 2025-05-14 14:48:25 | INFO  | Task 4d07e0cf-26db-4849-9693-c4895f858b2c is in state SUCCESS 2025-05-14 14:48:25.166429 | orchestrator | 2025-05-14 14:48:25 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:25.166433 | orchestrator | 2025-05-14 14:48:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:28.191198 | orchestrator | 2025-05-14 14:48:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 
14:48:28.191449 | orchestrator | 2025-05-14 14:48:28 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:28.191932 | orchestrator | 2025-05-14 14:48:28 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:28.192567 | orchestrator | 2025-05-14 14:48:28 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:28.193072 | orchestrator | 2025-05-14 14:48:28 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:28.193109 | orchestrator | 2025-05-14 14:48:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:31.215851 | orchestrator | 2025-05-14 14:48:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:31.216380 | orchestrator | 2025-05-14 14:48:31 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:31.216572 | orchestrator | 2025-05-14 14:48:31 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:31.223578 | orchestrator | 2025-05-14 14:48:31 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:31.226265 | orchestrator | 2025-05-14 14:48:31 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:31.226305 | orchestrator | 2025-05-14 14:48:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:34.252152 | orchestrator | 2025-05-14 14:48:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:34.254710 | orchestrator | 2025-05-14 14:48:34 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:34.257076 | orchestrator | 2025-05-14 14:48:34 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state STARTED 2025-05-14 14:48:34.259216 | orchestrator | 2025-05-14 14:48:34 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:34.261381 | orchestrator | 2025-05-14 14:48:34 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:34.261415 | orchestrator | 2025-05-14 14:48:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:37.303508 | orchestrator | 2025-05-14 14:48:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:37.304182 | orchestrator | 2025-05-14 14:48:37 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:37.304288 | orchestrator | 2025-05-14 14:48:37 | INFO  | Task 94688d67-9316-49cb-8d59-8151ae4c9d03 is in state SUCCESS 2025-05-14 14:48:37.305455 | orchestrator | 2025-05-14 14:48:37.305484 | orchestrator | 2025-05-14 14:48:37.305530 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:48:37.305542 | orchestrator | 2025-05-14 14:48:37.305554 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:48:37.305565 | orchestrator | Wednesday 14 May 2025 14:46:22 +0000 (0:00:00.352) 0:00:00.352 ********* 2025-05-14 14:48:37.305588 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:48:37.305600 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:48:37.305611 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:48:37.305622 | orchestrator | 2025-05-14 14:48:37.305633 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:48:37.305670 | orchestrator | Wednesday 
14 May 2025 14:46:23 +0000 (0:00:00.590) 0:00:00.942 ********* 2025-05-14 14:48:37.305696 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-14 14:48:37.305707 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-14 14:48:37.305718 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-14 14:48:37.305767 | orchestrator | 2025-05-14 14:48:37.305778 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-14 14:48:37.305818 | orchestrator | 2025-05-14 14:48:37.305830 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 14:48:37.305841 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:00.435) 0:00:01.377 ********* 2025-05-14 14:48:37.305852 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:48:37.305863 | orchestrator | 2025-05-14 14:48:37.305874 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-14 14:48:37.305884 | orchestrator | Wednesday 14 May 2025 14:46:24 +0000 (0:00:00.795) 0:00:02.173 ********* 2025-05-14 14:48:37.305895 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-14 14:48:37.305906 | orchestrator | 2025-05-14 14:48:37.305953 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-14 14:48:37.305965 | orchestrator | Wednesday 14 May 2025 14:46:28 +0000 (0:00:03.797) 0:00:05.970 ********* 2025-05-14 14:48:37.305976 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-14 14:48:37.305987 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-14 14:48:37.306009 | orchestrator | 2025-05-14 14:48:37.306117 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-14 14:48:37.306132 | orchestrator | Wednesday 14 May 2025 14:46:34 +0000 (0:00:06.538) 0:00:12.508 ********* 2025-05-14 14:48:37.306144 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-14 14:48:37.306155 | orchestrator | 2025-05-14 14:48:37.306165 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-14 14:48:37.306176 | orchestrator | Wednesday 14 May 2025 14:46:38 +0000 (0:00:03.762) 0:00:16.271 ********* 2025-05-14 14:48:37.306186 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:48:37.306196 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-14 14:48:37.306207 | orchestrator | 2025-05-14 14:48:37.306217 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-14 14:48:37.306228 | orchestrator | Wednesday 14 May 2025 14:46:42 +0000 (0:00:03.997) 0:00:20.269 ********* 2025-05-14 14:48:37.306238 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:48:37.306249 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-14 14:48:37.306259 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-14 14:48:37.306269 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-14 14:48:37.306280 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-14 14:48:37.306290 | orchestrator | 
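The item mappings that kolla-ansible iterates over in the tasks above (placement) and below (barbican) are flattened into single log entries and are hard to read here. A minimal sketch of one such entry, reconstructed from the placement-api items logged above and laid out as a Python literal; the surrounding variable name and the comments are editorial assumptions based on kolla-ansible's conventional "<service>_services" layout, not taken from the role source:

    # Reconstructed from the placement-api log items above, for readability only.
    placement_services = {
        "placement-api": {
            "container_name": "placement_api",
            "group": "placement-api",
            "image": "registry.osism.tech/kolla/release/placement-api:11.0.0.20241206",
            "enabled": True,
            "volumes": [
                "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
                "",  # trailing empty element, as emitted in the log
            ],
            "dimensions": {},
            # Each node probes its own API bind address (testbed-node-0 shown here).
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
                "timeout": "30",
            },
            # Two HAProxy front-ends: internal, and public via api.testbed.osism.xyz.
            "haproxy": {
                "placement_api": {
                    "enabled": True, "mode": "http", "external": False,
                    "port": "8780", "listen_port": "8780", "tls_backend": "no",
                },
                "placement_api_external": {
                    "enabled": True, "mode": "http", "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "8780", "listen_port": "8780", "tls_backend": "no",
                },
            },
        },
    }

The barbican items below follow the same shape for barbican-api, barbican-keystone-listener, and barbican-worker; as logged, only the load-balanced API containers carry a haproxy sub-dict, while the listener and worker entries define just a healthcheck using healthcheck_port.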
2025-05-14 14:48:37.306301 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-14 14:48:37.306326 | orchestrator | Wednesday 14 May 2025 14:46:59 +0000 (0:00:16.512) 0:00:36.781 ********* 2025-05-14 14:48:37.306336 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-14 14:48:37.306347 | orchestrator | 2025-05-14 14:48:37.306357 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-14 14:48:37.306367 | orchestrator | Wednesday 14 May 2025 14:47:03 +0000 (0:00:04.433) 0:00:41.214 ********* 2025-05-14 14:48:37.306381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.306420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.306434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.306447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306537 | orchestrator | 2025-05-14 14:48:37.306548 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-14 14:48:37.306559 | orchestrator | Wednesday 14 May 2025 14:47:06 +0000 (0:00:02.573) 0:00:43.788 ********* 2025-05-14 14:48:37.306569 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-14 14:48:37.306580 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-14 14:48:37.306590 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-14 14:48:37.306606 | orchestrator | 2025-05-14 14:48:37.306617 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-14 14:48:37.306628 | orchestrator | Wednesday 14 May 2025 14:47:08 +0000 (0:00:02.187) 0:00:45.976 ********* 2025-05-14 14:48:37.306638 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.306649 | orchestrator | 2025-05-14 14:48:37.306659 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-14 14:48:37.306670 | orchestrator | Wednesday 14 May 2025 14:47:08 +0000 (0:00:00.107) 0:00:46.083 ********* 2025-05-14 14:48:37.306680 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.306691 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:37.306701 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:37.306711 | orchestrator | 2025-05-14 14:48:37.306722 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 14:48:37.306732 | orchestrator | Wednesday 14 May 2025 14:47:08 +0000 (0:00:00.373) 0:00:46.456 ********* 2025-05-14 14:48:37.306743 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:48:37.306753 | orchestrator | 2025-05-14 14:48:37.306764 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-14 14:48:37.306774 | orchestrator | Wednesday 14 May 2025 14:47:09 +0000 (0:00:00.494) 0:00:46.951 ********* 2025-05-14 14:48:37.306785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.306809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.306822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.306841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.306926 | orchestrator | 2025-05-14 14:48:37.306937 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-14 14:48:37.306948 | orchestrator | Wednesday 14 May 2025 14:47:13 +0000 (0:00:03.820) 0:00:50.771 ********* 2025-05-14 14:48:37.306959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.306971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.306989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307001 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.307017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307058 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:37.307069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307131 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:37.307142 | orchestrator | 2025-05-14 14:48:37.307153 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-14 14:48:37.307175 | orchestrator | Wednesday 14 May 2025 14:47:14 +0000 (0:00:01.389) 0:00:52.161 ********* 2025-05-14 14:48:37.307186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307221 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:37.307239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307285 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:37.307296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307331 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.307342 | orchestrator | 2025-05-14 14:48:37.307352 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-14 14:48:37.307369 | orchestrator | Wednesday 14 May 2025 14:47:16 +0000 (0:00:02.098) 0:00:54.260 ********* 2025-05-14 14:48:37.307385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.307403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.307415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.307427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2025-05-14 14:48:37.307492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307515 | orchestrator | 2025-05-14 14:48:37.307525 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-14 14:48:37.307536 | orchestrator | Wednesday 14 May 2025 14:47:21 +0000 (0:00:04.430) 0:00:58.690 ********* 2025-05-14 14:48:37.307547 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:48:37.307558 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.307569 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:37.307579 | orchestrator | 2025-05-14 14:48:37.307590 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-14 14:48:37.307600 | orchestrator | Wednesday 14 May 2025 14:47:24 +0000 (0:00:03.239) 0:01:01.930 ********* 2025-05-14 14:48:37.307611 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:48:37.307622 | orchestrator | 2025-05-14 14:48:37.307632 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-14 14:48:37.307643 | orchestrator | Wednesday 14 May 2025 14:47:26 +0000 (0:00:01.933) 0:01:03.863 ********* 2025-05-14 14:48:37.307653 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.307664 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:37.307675 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:37.307686 | orchestrator | 2025-05-14 14:48:37.307696 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-14 14:48:37.307713 | orchestrator | Wednesday 14 May 2025 14:47:27 +0000 (0:00:01.522) 0:01:05.385 ********* 2025-05-14 14:48:37.307736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.307749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.307761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.307773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.307861 | orchestrator | 2025-05-14 14:48:37.307872 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-14 14:48:37.307883 | orchestrator | Wednesday 14 May 2025 14:47:39 +0000 (0:00:11.970) 0:01:17.356 ********* 2025-05-14 14:48:37.307894 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307946 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.307957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.307969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.307981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.308000 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:37.308018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 14:48:37.308034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.308046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:48:37.308057 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:37.308068 | orchestrator | 2025-05-14 14:48:37.308078 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-14 14:48:37.308105 | orchestrator | Wednesday 14 May 2025 14:47:41 +0000 (0:00:01.224) 0:01:18.580 ********* 2025-05-14 14:48:37.308117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.308129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.308162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 14:48:37.308175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.308186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.308198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.308209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.308227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.308245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:48:37.308256 | orchestrator | 2025-05-14 14:48:37.308267 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 14:48:37.308282 | orchestrator | Wednesday 14 May 2025 14:47:45 +0000 (0:00:04.389) 0:01:22.970 ********* 2025-05-14 14:48:37.308293 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:48:37.308303 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:48:37.308314 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:48:37.308325 | orchestrator | 2025-05-14 14:48:37.308335 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-14 14:48:37.308346 | orchestrator | Wednesday 14 May 2025 14:47:45 +0000 (0:00:00.504) 0:01:23.474 ********* 2025-05-14 14:48:37.308356 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.308367 | orchestrator | 2025-05-14 14:48:37.308378 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-14 14:48:37.308388 | orchestrator | Wednesday 14 May 2025 14:47:48 +0000 (0:00:02.839) 0:01:26.314 ********* 2025-05-14 14:48:37.308399 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.308409 | orchestrator | 2025-05-14 14:48:37.308420 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-14 14:48:37.308430 | orchestrator | Wednesday 14 May 2025 14:47:51 +0000 (0:00:02.545) 0:01:28.859 ********* 2025-05-14 14:48:37.308441 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.308451 | orchestrator | 2025-05-14 14:48:37.308462 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-14 14:48:37.308473 | orchestrator | Wednesday 14 May 2025 14:48:02 +0000 (0:00:10.990) 0:01:39.850 ********* 2025-05-14 14:48:37.308483 | orchestrator | 2025-05-14 14:48:37.308494 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-14 14:48:37.308505 | orchestrator | Wednesday 14 May 2025 14:48:02 +0000 (0:00:00.091) 0:01:39.941 ********* 2025-05-14 14:48:37.308515 | orchestrator | 2025-05-14 14:48:37.308526 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-14 14:48:37.308537 | orchestrator | Wednesday 14 May 2025 14:48:02 +0000 (0:00:00.248) 0:01:40.189 ********* 2025-05-14 14:48:37.308553 | orchestrator | 2025-05-14 14:48:37.308564 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-14 14:48:37.308575 | orchestrator | Wednesday 14 May 2025 14:48:02 +0000 (0:00:00.085) 0:01:40.275 ********* 2025-05-14 14:48:37.308585 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.308596 | orchestrator | changed: [testbed-node-2] 
2025-05-14 14:48:37.308606 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:37.308617 | orchestrator | 2025-05-14 14:48:37.308627 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-14 14:48:37.308638 | orchestrator | Wednesday 14 May 2025 14:48:14 +0000 (0:00:11.281) 0:01:51.556 ********* 2025-05-14 14:48:37.308649 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:37.308660 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.308670 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:48:37.308681 | orchestrator | 2025-05-14 14:48:37.308692 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-14 14:48:37.308703 | orchestrator | Wednesday 14 May 2025 14:48:23 +0000 (0:00:09.858) 0:02:01.415 ********* 2025-05-14 14:48:37.308713 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:48:37.308724 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:48:37.308735 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:48:37.308745 | orchestrator | 2025-05-14 14:48:37.308756 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:48:37.308767 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:48:37.308778 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:48:37.308789 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:48:37.308799 | orchestrator | 2025-05-14 14:48:37.308810 | orchestrator | 2025-05-14 14:48:37.308821 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:48:37.308832 | orchestrator | Wednesday 14 May 2025 14:48:34 +0000 (0:00:10.699) 0:02:12.115 ********* 2025-05-14 14:48:37.308842 | orchestrator | =============================================================================== 2025-05-14 14:48:37.308853 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.51s 2025-05-14 14:48:37.308863 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.97s 2025-05-14 14:48:37.308874 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.28s 2025-05-14 14:48:37.308884 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.99s 2025-05-14 14:48:37.308895 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.70s 2025-05-14 14:48:37.308906 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.86s 2025-05-14 14:48:37.308916 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.54s 2025-05-14 14:48:37.308932 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.43s 2025-05-14 14:48:37.308943 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.43s 2025-05-14 14:48:37.308954 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.39s 2025-05-14 14:48:37.308964 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.00s 2025-05-14 14:48:37.308975 | orchestrator | service-cert-copy : barbican | Copying over extra CA 
certificates ------- 3.82s 2025-05-14 14:48:37.308986 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.80s 2025-05-14 14:48:37.308997 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.76s 2025-05-14 14:48:37.309012 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.24s 2025-05-14 14:48:37.309029 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.84s 2025-05-14 14:48:37.309039 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.57s 2025-05-14 14:48:37.309050 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.55s 2025-05-14 14:48:37.309061 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.19s 2025-05-14 14:48:37.309071 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.10s 2025-05-14 14:48:37.309082 | orchestrator | 2025-05-14 14:48:37 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:37.309120 | orchestrator | 2025-05-14 14:48:37 | INFO  | Task 5020fb86-6df1-4ab9-bc97-9d082a5c9bb9 is in state STARTED 2025-05-14 14:48:37.309132 | orchestrator | 2025-05-14 14:48:37 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:37.309143 | orchestrator | 2025-05-14 14:48:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:40.345236 | orchestrator | 2025-05-14 14:48:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:40.345365 | orchestrator | 2025-05-14 14:48:40 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:40.346173 | orchestrator | 2025-05-14 14:48:40 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:40.346450 | orchestrator | 2025-05-14 14:48:40 | INFO  | Task 5020fb86-6df1-4ab9-bc97-9d082a5c9bb9 is in state SUCCESS 2025-05-14 14:48:40.346896 | orchestrator | 2025-05-14 14:48:40 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:40.346930 | orchestrator | 2025-05-14 14:48:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:43.373603 | orchestrator | 2025-05-14 14:48:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:43.374216 | orchestrator | 2025-05-14 14:48:43 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:48:43.374825 | orchestrator | 2025-05-14 14:48:43 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:43.375152 | orchestrator | 2025-05-14 14:48:43 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:43.375871 | orchestrator | 2025-05-14 14:48:43 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:43.375905 | orchestrator | 2025-05-14 14:48:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:46.404500 | orchestrator | 2025-05-14 14:48:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:46.404632 | orchestrator | 2025-05-14 14:48:46 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:48:46.405067 | orchestrator | 2025-05-14 14:48:46 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 
14:48:46.405485 | orchestrator | 2025-05-14 14:48:46 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:46.405992 | orchestrator | 2025-05-14 14:48:46 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:46.406052 | orchestrator | 2025-05-14 14:48:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:49.428967 | orchestrator | 2025-05-14 14:48:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:49.431030 | orchestrator | 2025-05-14 14:48:49 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:48:49.432151 | orchestrator | 2025-05-14 14:48:49 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:49.432194 | orchestrator | 2025-05-14 14:48:49 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:49.432386 | orchestrator | 2025-05-14 14:48:49 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:49.432496 | orchestrator | 2025-05-14 14:48:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:52.472378 | orchestrator | 2025-05-14 14:48:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:52.472596 | orchestrator | 2025-05-14 14:48:52 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:48:52.473046 | orchestrator | 2025-05-14 14:48:52 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:52.474713 | orchestrator | 2025-05-14 14:48:52 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:52.474930 | orchestrator | 2025-05-14 14:48:52 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:52.474953 | orchestrator | 2025-05-14 14:48:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:55.509212 | orchestrator | 2025-05-14 14:48:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:55.509412 | orchestrator | 2025-05-14 14:48:55 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:48:55.509920 | orchestrator | 2025-05-14 14:48:55 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:55.510515 | orchestrator | 2025-05-14 14:48:55 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:55.511318 | orchestrator | 2025-05-14 14:48:55 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:55.511361 | orchestrator | 2025-05-14 14:48:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:48:58.538296 | orchestrator | 2025-05-14 14:48:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:48:58.538457 | orchestrator | 2025-05-14 14:48:58 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:48:58.538847 | orchestrator | 2025-05-14 14:48:58 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:48:58.539329 | orchestrator | 2025-05-14 14:48:58 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:48:58.539843 | orchestrator | 2025-05-14 14:48:58 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:48:58.539866 | orchestrator | 2025-05-14 14:48:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 
14:49:01.573361 | orchestrator | 2025-05-14 14:49:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:01.573596 | orchestrator | 2025-05-14 14:49:01 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:01.580477 | orchestrator | 2025-05-14 14:49:01 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:01.585836 | orchestrator | 2025-05-14 14:49:01 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:01.588923 | orchestrator | 2025-05-14 14:49:01 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:01.589239 | orchestrator | 2025-05-14 14:49:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:04.625935 | orchestrator | 2025-05-14 14:49:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:04.626183 | orchestrator | 2025-05-14 14:49:04 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:04.627398 | orchestrator | 2025-05-14 14:49:04 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:04.627930 | orchestrator | 2025-05-14 14:49:04 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:04.628486 | orchestrator | 2025-05-14 14:49:04 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:04.628495 | orchestrator | 2025-05-14 14:49:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:07.658559 | orchestrator | 2025-05-14 14:49:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:07.658652 | orchestrator | 2025-05-14 14:49:07 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:07.658666 | orchestrator | 2025-05-14 14:49:07 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:07.658678 | orchestrator | 2025-05-14 14:49:07 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:07.658690 | orchestrator | 2025-05-14 14:49:07 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:07.658700 | orchestrator | 2025-05-14 14:49:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:10.703669 | orchestrator | 2025-05-14 14:49:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:10.703766 | orchestrator | 2025-05-14 14:49:10 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:10.704403 | orchestrator | 2025-05-14 14:49:10 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:10.705126 | orchestrator | 2025-05-14 14:49:10 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:10.706342 | orchestrator | 2025-05-14 14:49:10 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:10.706377 | orchestrator | 2025-05-14 14:49:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:13.750319 | orchestrator | 2025-05-14 14:49:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:13.750407 | orchestrator | 2025-05-14 14:49:13 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:13.750423 | orchestrator | 2025-05-14 14:49:13 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in 
state STARTED 2025-05-14 14:49:13.750436 | orchestrator | 2025-05-14 14:49:13 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:13.750447 | orchestrator | 2025-05-14 14:49:13 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:13.750459 | orchestrator | 2025-05-14 14:49:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:16.780929 | orchestrator | 2025-05-14 14:49:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:16.781048 | orchestrator | 2025-05-14 14:49:16 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:16.781663 | orchestrator | 2025-05-14 14:49:16 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:16.782344 | orchestrator | 2025-05-14 14:49:16 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:16.785132 | orchestrator | 2025-05-14 14:49:16 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:16.785252 | orchestrator | 2025-05-14 14:49:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:19.824877 | orchestrator | 2025-05-14 14:49:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:19.825433 | orchestrator | 2025-05-14 14:49:19 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:19.826127 | orchestrator | 2025-05-14 14:49:19 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:19.826838 | orchestrator | 2025-05-14 14:49:19 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:19.827752 | orchestrator | 2025-05-14 14:49:19 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:19.827796 | orchestrator | 2025-05-14 14:49:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:22.858738 | orchestrator | 2025-05-14 14:49:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:22.858833 | orchestrator | 2025-05-14 14:49:22 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:22.859085 | orchestrator | 2025-05-14 14:49:22 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:22.859579 | orchestrator | 2025-05-14 14:49:22 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:22.860044 | orchestrator | 2025-05-14 14:49:22 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:22.860070 | orchestrator | 2025-05-14 14:49:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:25.887295 | orchestrator | 2025-05-14 14:49:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:25.887851 | orchestrator | 2025-05-14 14:49:25 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:25.887937 | orchestrator | 2025-05-14 14:49:25 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:25.888560 | orchestrator | 2025-05-14 14:49:25 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:25.888797 | orchestrator | 2025-05-14 14:49:25 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:25.888824 | orchestrator | 2025-05-14 14:49:25 | INFO  | Wait 1 second(s) until the 
next check 2025-05-14 14:49:28.915992 | orchestrator | 2025-05-14 14:49:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:28.916152 | orchestrator | 2025-05-14 14:49:28 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:28.916538 | orchestrator | 2025-05-14 14:49:28 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state STARTED 2025-05-14 14:49:28.917007 | orchestrator | 2025-05-14 14:49:28 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:28.917544 | orchestrator | 2025-05-14 14:49:28 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:28.917567 | orchestrator | 2025-05-14 14:49:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:31.946306 | orchestrator | 2025-05-14 14:49:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:31.946397 | orchestrator | 2025-05-14 14:49:31 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:31.948433 | orchestrator | 2025-05-14 14:49:31.948511 | orchestrator | 2025-05-14 14:49:31.948526 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:49:31.948564 | orchestrator | 2025-05-14 14:49:31.948576 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:49:31.948588 | orchestrator | Wednesday 14 May 2025 14:48:37 +0000 (0:00:00.227) 0:00:00.227 ********* 2025-05-14 14:49:31.948599 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:49:31.948610 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:49:31.948621 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:49:31.948631 | orchestrator | 2025-05-14 14:49:31.948642 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:49:31.948653 | orchestrator | Wednesday 14 May 2025 14:48:38 +0000 (0:00:00.384) 0:00:00.612 ********* 2025-05-14 14:49:31.948664 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-14 14:49:31.948674 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-14 14:49:31.948685 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-14 14:49:31.948696 | orchestrator | 2025-05-14 14:49:31.948706 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-14 14:49:31.948717 | orchestrator | 2025-05-14 14:49:31.948728 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-14 14:49:31.948739 | orchestrator | Wednesday 14 May 2025 14:48:38 +0000 (0:00:00.463) 0:00:01.076 ********* 2025-05-14 14:49:31.948778 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:49:31.948789 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:49:31.948799 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:49:31.948810 | orchestrator | 2025-05-14 14:49:31.948821 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:49:31.948833 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:49:31.948869 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:49:31.948881 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
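The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling its asynchronous service-deployment tasks until each one reports SUCCESS. A minimal illustrative sketch of that polling pattern, using a hypothetical get_task_state() helper rather than the actual OSISM client code:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        # Poll every task until none is left in a pending state such as STARTED.
        # get_task_state is a hypothetical callable that returns a state string
        # like "STARTED" or "SUCCESS" for a task id, mirroring the log output.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

In this log, each service deployment (barbican above, designate below) appears to run as one such task, which is why several task IDs are polled in parallel and a play's Ansible output is printed once its task reaches SUCCESS.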
2025-05-14 14:49:31.948892 | orchestrator | 2025-05-14 14:49:31.948991 | orchestrator | 2025-05-14 14:49:31.949004 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:49:31.949015 | orchestrator | Wednesday 14 May 2025 14:48:39 +0000 (0:00:00.657) 0:00:01.734 ********* 2025-05-14 14:49:31.949026 | orchestrator | =============================================================================== 2025-05-14 14:49:31.949089 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.66s 2025-05-14 14:49:31.949102 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-05-14 14:49:31.949114 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-05-14 14:49:31.949125 | orchestrator | 2025-05-14 14:49:31.949138 | orchestrator | 2025-05-14 14:49:31 | INFO  | Task c2f10781-cbe3-4fac-833d-506f3962335e is in state SUCCESS 2025-05-14 14:49:31.949737 | orchestrator | 2025-05-14 14:49:31.949764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:49:31.949775 | orchestrator | 2025-05-14 14:49:31.949786 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:49:31.949797 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:00.310) 0:00:00.310 ********* 2025-05-14 14:49:31.949808 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:49:31.949819 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:49:31.949829 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:49:31.949840 | orchestrator | 2025-05-14 14:49:31.949850 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:49:31.949861 | orchestrator | Wednesday 14 May 2025 14:46:24 +0000 (0:00:00.441) 0:00:00.751 ********* 2025-05-14 14:49:31.949872 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-14 14:49:31.949883 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-14 14:49:31.949894 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-14 14:49:31.949920 | orchestrator | 2025-05-14 14:49:31.949931 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-14 14:49:31.949941 | orchestrator | 2025-05-14 14:49:31.949952 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 14:49:31.949962 | orchestrator | Wednesday 14 May 2025 14:46:24 +0000 (0:00:00.392) 0:00:01.144 ********* 2025-05-14 14:49:31.949973 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:49:31.949984 | orchestrator | 2025-05-14 14:49:31.949995 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-14 14:49:31.950005 | orchestrator | Wednesday 14 May 2025 14:46:25 +0000 (0:00:00.677) 0:00:01.822 ********* 2025-05-14 14:49:31.950057 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-14 14:49:31.950069 | orchestrator | 2025-05-14 14:49:31.950091 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-14 14:49:31.950103 | orchestrator | Wednesday 14 May 2025 14:46:29 +0000 (0:00:04.059) 0:00:05.881 ********* 2025-05-14 14:49:31.950113 | orchestrator | 
changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-14 14:49:31.950124 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-14 14:49:31.950136 | orchestrator | 2025-05-14 14:49:31.950148 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-14 14:49:31.950161 | orchestrator | Wednesday 14 May 2025 14:46:36 +0000 (0:00:07.049) 0:00:12.930 ********* 2025-05-14 14:49:31.950194 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:49:31.950207 | orchestrator | 2025-05-14 14:49:31.950219 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-14 14:49:31.950231 | orchestrator | Wednesday 14 May 2025 14:46:39 +0000 (0:00:03.390) 0:00:16.321 ********* 2025-05-14 14:49:31.950243 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:49:31.950255 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-14 14:49:31.950267 | orchestrator | 2025-05-14 14:49:31.950278 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-14 14:49:31.950290 | orchestrator | Wednesday 14 May 2025 14:46:43 +0000 (0:00:03.932) 0:00:20.253 ********* 2025-05-14 14:49:31.950302 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:49:31.950314 | orchestrator | 2025-05-14 14:49:31.950377 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-14 14:49:31.950391 | orchestrator | Wednesday 14 May 2025 14:46:46 +0000 (0:00:03.295) 0:00:23.549 ********* 2025-05-14 14:49:31.950403 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-14 14:49:31.950413 | orchestrator | 2025-05-14 14:49:31.950424 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-14 14:49:31.950435 | orchestrator | Wednesday 14 May 2025 14:46:51 +0000 (0:00:04.351) 0:00:27.901 ********* 2025-05-14 14:49:31.950449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.950488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.950507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.950519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.950745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.950757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.950780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.950792 | orchestrator | 2025-05-14 14:49:31.950804 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-14 14:49:31.950815 | orchestrator | Wednesday 14 May 2025 14:46:54 +0000 (0:00:03.482) 0:00:31.383 ********* 2025-05-14 14:49:31.950826 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.950836 | orchestrator | 2025-05-14 14:49:31.950847 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-14 14:49:31.950858 | orchestrator | Wednesday 14 May 2025 14:46:54 +0000 (0:00:00.117) 0:00:31.501 ********* 2025-05-14 14:49:31.950868 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.950879 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:49:31.950890 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:49:31.950901 | orchestrator | 2025-05-14 14:49:31.950911 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 14:49:31.950922 | orchestrator | Wednesday 14 May 2025 14:46:55 +0000 (0:00:00.397) 0:00:31.898 ********* 2025-05-14 14:49:31.950933 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:49:31.950944 | orchestrator | 2025-05-14 14:49:31.950955 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-14 14:49:31.950965 | orchestrator | Wednesday 14 May 2025 14:46:55 +0000 (0:00:00.676) 0:00:32.574 ********* 2025-05-14 14:49:31.950981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.950993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.951011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.951028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951111 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.951284 | orchestrator | 2025-05-14 14:49:31.951296 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-14 14:49:31.951306 | orchestrator | Wednesday 14 May 2025 14:47:02 +0000 (0:00:06.566) 0:00:39.141 ********* 2025-05-14 14:49:31.951322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.951334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.951352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951639 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:49:31.951657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.951669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.951688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.951718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.951746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951829 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.951840 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:49:31.951851 | orchestrator | 2025-05-14 14:49:31.951862 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-14 14:49:31.951873 | orchestrator | Wednesday 14 May 2025 14:47:03 +0000 (0:00:01.047) 0:00:40.189 ********* 2025-05-14 14:49:31.951888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.951910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.951922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 
14:49:31.951933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.951963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.952019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.952041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952053 | orchestrator | 
skipping: [testbed-node-2] 2025-05-14 14:49:31.952064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952154 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.952217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-05-14 14:49:31.952239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.952252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952320 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:49:31.952333 | orchestrator | 2025-05-14 14:49:31.952344 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-14 14:49:31.952357 | orchestrator | 
Wednesday 14 May 2025 14:47:05 +0000 (0:00:02.198) 0:00:42.387 ********* 2025-05-14 14:49:31.952369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.952388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.952428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.952442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952461 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952628 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.952929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.952941 | orchestrator | 2025-05-14 14:49:31.952952 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-14 14:49:31.952963 | orchestrator | Wednesday 14 May 2025 14:47:11 +0000 (0:00:06.197) 0:00:48.585 ********* 2025-05-14 14:49:31.952974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.952986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.953005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.953024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953414 | orchestrator | 2025-05-14 14:49:31.953434 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-14 14:49:31.953445 | orchestrator | Wednesday 14 May 2025 14:47:40 +0000 (0:00:28.300) 0:01:16.885 ********* 2025-05-14 14:49:31.953456 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 14:49:31.953467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 14:49:31.953478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 14:49:31.953489 | orchestrator | 2025-05-14 14:49:31.953500 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-14 14:49:31.953539 | orchestrator | Wednesday 14 May 2025 14:47:48 +0000 (0:00:08.056) 0:01:24.942 ********* 2025-05-14 14:49:31.953552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 14:49:31.953570 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 14:49:31.953582 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 14:49:31.953594 | orchestrator | 2025-05-14 14:49:31.953607 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-14 14:49:31.953619 | orchestrator | Wednesday 14 May 2025 14:47:52 +0000 (0:00:04.621) 0:01:29.563 ********* 2025-05-14 14:49:31.953631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.953649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.953663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.953683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953735 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-05-14 14:49:31.953894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.953939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.953950 | orchestrator | 2025-05-14 14:49:31.953961 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-14 14:49:31.953972 | orchestrator | Wednesday 14 May 2025 14:47:56 +0000 (0:00:04.115) 0:01:33.679 ********* 2025-05-14 14:49:31.953983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.954001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.954857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.954935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.954962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.954975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955282 | orchestrator | 2025-05-14 14:49:31.955294 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 14:49:31.955307 | orchestrator | Wednesday 14 May 2025 14:47:59 +0000 (0:00:02.815) 0:01:36.495 ********* 2025-05-14 14:49:31.955318 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.955330 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:49:31.955340 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:49:31.955351 | orchestrator | 2025-05-14 
14:49:31.955362 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-14 14:49:31.955373 | orchestrator | Wednesday 14 May 2025 14:48:00 +0000 (0:00:00.349) 0:01:36.844 ********* 2025-05-14 14:49:31.955393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.955408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.955425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955502 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.955515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.955532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.955551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955621 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:49:31.955638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 14:49:31.955656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 14:49:31.955669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.955744 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:49:31.955756 | orchestrator | 2025-05-14 14:49:31.955766 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-14 14:49:31.955777 | orchestrator | Wednesday 14 May 2025 14:48:00 +0000 (0:00:00.817) 0:01:37.661 ********* 2025-05-14 14:49:31.955793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.955805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.955816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
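The items being looped over here come from kolla-ansible's designate service map (designate_services in the role defaults); each entry carries the container name, image, bind mounts and a healthcheck definition, and the API service additionally carries its haproxy frontend settings. For readability, here is the designate-api entry reported for testbed-node-0, copied from the log and only reindented as a Python literal. The test commands healthcheck_curl, healthcheck_port and healthcheck_listen are healthcheck helper scripts shipped in the kolla images; the empty string in volumes is presumably an optional volume that is not enabled in this deployment.

# designate-api item as reported for testbed-node-0 above, reindented only.
designate_api_item = {
    'key': 'designate-api',
    'value': {
        'container_name': 'designate_api',
        'group': 'designate-api',
        'enabled': True,
        'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206',
        'volumes': [
            '/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
            '/etc/localtime:/etc/localtime:ro',
            '/etc/timezone:/etc/timezone:ro',
            'kolla_logs:/var/log/kolla/',
            '',  # optional extra volume, empty in this deployment
        ],
        'dimensions': {},
        'healthcheck': {
            'interval': '30',
            'retries': '3',
            'start_period': '5',
            'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'],
            'timeout': '30',
        },
        'haproxy': {
            'designate_api': {
                'enabled': 'yes',
                'mode': 'http',
                'external': False,
                'port': '9001',
                'listen_port': '9001',
            },
            'designate_api_external': {
                'enabled': 'yes',
                'mode': 'http',
                'external': True,
                'external_fqdn': 'api.testbed.osism.xyz',
                'port': '9001',
                'listen_port': '9001',
            },
        },
    },
}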
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 14:49:31.955834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.955994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.956005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.956023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.956040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.956056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 14:49:31.956068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
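Each healthcheck block maps onto the container engine's native healthcheck options (interval, timeout, start period, retry count and a CMD-SHELL test). A minimal sketch of that mapping, assuming the numeric values are seconds (as they appear to be) and using the Docker Python SDK purely for illustration; in the actual deployment this translation is handled inside kolla-ansible's own container module, not by hand:

from docker.types import Healthcheck

NS_PER_S = 1_000_000_000  # the Docker API expects durations in nanoseconds

def to_docker_healthcheck(hc: dict) -> Healthcheck:
    # hc is a kolla-style healthcheck dict as shown in the log, e.g.
    # {'interval': '30', 'retries': '3', 'start_period': '5',
    #  'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}
    return Healthcheck(
        test=hc['test'],
        interval=int(hc['interval']) * NS_PER_S,
        timeout=int(hc['timeout']) * NS_PER_S,
        start_period=int(hc['start_period']) * NS_PER_S,
        retries=int(hc['retries']),
    )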
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.956079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 14:49:31.956090 | orchestrator | 2025-05-14 14:49:31.956101 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 14:49:31.956112 | orchestrator | Wednesday 14 May 2025 14:48:06 +0000 (0:00:05.374) 0:01:43.036 ********* 2025-05-14 14:49:31.956123 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:49:31.956134 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:49:31.956145 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:49:31.956155 | orchestrator | 2025-05-14 14:49:31.956166 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-14 14:49:31.956212 | orchestrator | Wednesday 14 May 2025 14:48:06 +0000 (0:00:00.438) 0:01:43.474 ********* 2025-05-14 14:49:31.956225 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-14 14:49:31.956235 | orchestrator | 2025-05-14 14:49:31.956246 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-14 14:49:31.956257 | orchestrator | Wednesday 14 May 2025 14:48:09 +0000 (0:00:02.352) 0:01:45.827 ********* 2025-05-14 14:49:31.956267 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:49:31.956278 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-14 14:49:31.956289 | orchestrator | 2025-05-14 14:49:31.956299 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-14 14:49:31.956310 | orchestrator | Wednesday 14 May 2025 14:48:11 +0000 (0:00:02.608) 0:01:48.436 ********* 2025-05-14 14:49:31.956327 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956338 | orchestrator | 2025-05-14 14:49:31.956348 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 14:49:31.956359 | orchestrator | Wednesday 14 May 2025 14:48:26 +0000 (0:00:14.862) 0:02:03.299 ********* 2025-05-14 14:49:31.956370 | orchestrator | 2025-05-14 14:49:31.956380 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 14:49:31.956391 | orchestrator | Wednesday 14 May 2025 14:48:26 +0000 (0:00:00.151) 0:02:03.450 ********* 2025-05-14 14:49:31.956401 | orchestrator | 2025-05-14 14:49:31.956412 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 14:49:31.956429 | orchestrator | Wednesday 
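Before the handlers restart anything, the play creates the designate database and its database user and then runs a one-shot bootstrap container, which in the usual kolla-ansible pattern executes the designate database migrations so the schema exists before the long-running services come up. A rough illustration of what the two database tasks amount to, with a made-up host name and placeholder credentials; the real tasks drive this through Ansible MySQL modules against the MariaDB/Galera VIP rather than raw SQL:

import pymysql  # assumption: MariaDB reachable under a hypothetical host name

conn = pymysql.connect(host='database.internal.example', user='root', password='...')
try:
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS designate")
        # '%%' escapes a literal '%' because a parameter is passed to execute()
        cur.execute("CREATE USER IF NOT EXISTS 'designate'@'%%' IDENTIFIED BY %s", ('...',))
        cur.execute("GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%'")
    conn.commit()
finally:
    conn.close()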
14 May 2025 14:48:26 +0000 (0:00:00.121) 0:02:03.572 ********* 2025-05-14 14:49:31.956440 | orchestrator | 2025-05-14 14:49:31.956451 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-14 14:49:31.956462 | orchestrator | Wednesday 14 May 2025 14:48:26 +0000 (0:00:00.133) 0:02:03.706 ********* 2025-05-14 14:49:31.956473 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956483 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:49:31.956494 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:49:31.956504 | orchestrator | 2025-05-14 14:49:31.956515 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-14 14:49:31.956526 | orchestrator | Wednesday 14 May 2025 14:48:39 +0000 (0:00:12.672) 0:02:16.379 ********* 2025-05-14 14:49:31.956536 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956547 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:49:31.956558 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:49:31.956569 | orchestrator | 2025-05-14 14:49:31.956579 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-14 14:49:31.956590 | orchestrator | Wednesday 14 May 2025 14:48:46 +0000 (0:00:06.524) 0:02:22.903 ********* 2025-05-14 14:49:31.956601 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956611 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:49:31.956622 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:49:31.956633 | orchestrator | 2025-05-14 14:49:31.956643 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-14 14:49:31.956654 | orchestrator | Wednesday 14 May 2025 14:48:52 +0000 (0:00:06.836) 0:02:29.739 ********* 2025-05-14 14:49:31.956664 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956675 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:49:31.956686 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:49:31.956696 | orchestrator | 2025-05-14 14:49:31.956707 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-14 14:49:31.956718 | orchestrator | Wednesday 14 May 2025 14:49:04 +0000 (0:00:11.606) 0:02:41.346 ********* 2025-05-14 14:49:31.956733 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:49:31.956744 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:49:31.956755 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956765 | orchestrator | 2025-05-14 14:49:31.956776 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-14 14:49:31.956787 | orchestrator | Wednesday 14 May 2025 14:49:14 +0000 (0:00:09.971) 0:02:51.318 ********* 2025-05-14 14:49:31.956797 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956808 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:49:31.956818 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:49:31.956829 | orchestrator | 2025-05-14 14:49:31.956839 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-14 14:49:31.956850 | orchestrator | Wednesday 14 May 2025 14:49:22 +0000 (0:00:07.902) 0:02:59.220 ********* 2025-05-14 14:49:31.956861 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:49:31.956872 | orchestrator | 2025-05-14 14:49:31.956882 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-14 14:49:31.956893 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:49:31.956913 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:49:31.956924 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 14:49:31.956935 | orchestrator | 2025-05-14 14:49:31.956946 | orchestrator | 2025-05-14 14:49:31.956956 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:49:31.956967 | orchestrator | Wednesday 14 May 2025 14:49:28 +0000 (0:00:06.378) 0:03:05.599 ********* 2025-05-14 14:49:31.956978 | orchestrator | =============================================================================== 2025-05-14 14:49:31.956989 | orchestrator | designate : Copying over designate.conf -------------------------------- 28.30s 2025-05-14 14:49:31.957000 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.86s 2025-05-14 14:49:31.957010 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.67s 2025-05-14 14:49:31.957021 | orchestrator | designate : Restart designate-producer container ----------------------- 11.61s 2025-05-14 14:49:31.957032 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.97s 2025-05-14 14:49:31.957043 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.06s 2025-05-14 14:49:31.957053 | orchestrator | designate : Restart designate-worker container -------------------------- 7.90s 2025-05-14 14:49:31.957064 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.05s 2025-05-14 14:49:31.957075 | orchestrator | designate : Restart designate-central container ------------------------- 6.84s 2025-05-14 14:49:31.957086 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.57s 2025-05-14 14:49:31.957097 | orchestrator | designate : Restart designate-api container ----------------------------- 6.52s 2025-05-14 14:49:31.957107 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.38s 2025-05-14 14:49:31.957118 | orchestrator | designate : Copying over config.json files for services ----------------- 6.20s 2025-05-14 14:49:31.957129 | orchestrator | designate : Check designate containers ---------------------------------- 5.37s 2025-05-14 14:49:31.957139 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.62s 2025-05-14 14:49:31.957150 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.35s 2025-05-14 14:49:31.957160 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.12s 2025-05-14 14:49:31.957240 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.06s 2025-05-14 14:49:31.957255 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.93s 2025-05-14 14:49:31.957266 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.48s 2025-05-14 14:49:31.957277 | orchestrator | 2025-05-14 14:49:31 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 
14:49:31.957288 | orchestrator | 2025-05-14 14:49:31 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:31.957299 | orchestrator | 2025-05-14 14:49:31 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:31.957310 | orchestrator | 2025-05-14 14:49:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:34.981937 | orchestrator | 2025-05-14 14:49:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:34.982468 | orchestrator | 2025-05-14 14:49:34 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:34.982972 | orchestrator | 2025-05-14 14:49:34 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:34.983622 | orchestrator | 2025-05-14 14:49:34 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:34.984236 | orchestrator | 2025-05-14 14:49:34 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:34.984262 | orchestrator | 2025-05-14 14:49:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:38.034877 | orchestrator | 2025-05-14 14:49:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:38.036303 | orchestrator | 2025-05-14 14:49:38 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:38.037648 | orchestrator | 2025-05-14 14:49:38 | INFO  | Task bb522fb7-62a0-4d4e-b3c0-1857a88fe22d is in state STARTED 2025-05-14 14:49:38.038980 | orchestrator | 2025-05-14 14:49:38 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:38.040142 | orchestrator | 2025-05-14 14:49:38 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:38.041288 | orchestrator | 2025-05-14 14:49:38 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:38.041313 | orchestrator | 2025-05-14 14:49:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:41.085445 | orchestrator | 2025-05-14 14:49:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:41.085535 | orchestrator | 2025-05-14 14:49:41 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:41.085550 | orchestrator | 2025-05-14 14:49:41 | INFO  | Task bb522fb7-62a0-4d4e-b3c0-1857a88fe22d is in state STARTED 2025-05-14 14:49:41.085562 | orchestrator | 2025-05-14 14:49:41 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:41.085842 | orchestrator | 2025-05-14 14:49:41 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:41.087596 | orchestrator | 2025-05-14 14:49:41 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:41.087650 | orchestrator | 2025-05-14 14:49:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:44.139412 | orchestrator | 2025-05-14 14:49:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:44.139494 | orchestrator | 2025-05-14 14:49:44 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:44.141844 | orchestrator | 2025-05-14 14:49:44 | INFO  | Task bb522fb7-62a0-4d4e-b3c0-1857a88fe22d is in state STARTED 2025-05-14 14:49:44.144927 | orchestrator | 2025-05-14 14:49:44 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in 
state STARTED 2025-05-14 14:49:44.146767 | orchestrator | 2025-05-14 14:49:44 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:44.148612 | orchestrator | 2025-05-14 14:49:44 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:44.149051 | orchestrator | 2025-05-14 14:49:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:47.212274 | orchestrator | 2025-05-14 14:49:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:47.212429 | orchestrator | 2025-05-14 14:49:47 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:47.213021 | orchestrator | 2025-05-14 14:49:47 | INFO  | Task bb522fb7-62a0-4d4e-b3c0-1857a88fe22d is in state SUCCESS 2025-05-14 14:49:47.214190 | orchestrator | 2025-05-14 14:49:47 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:47.214738 | orchestrator | 2025-05-14 14:49:47 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:47.215771 | orchestrator | 2025-05-14 14:49:47 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:47.215788 | orchestrator | 2025-05-14 14:49:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:50.260580 | orchestrator | 2025-05-14 14:49:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:50.262737 | orchestrator | 2025-05-14 14:49:50 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:50.263509 | orchestrator | 2025-05-14 14:49:50 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:50.264884 | orchestrator | 2025-05-14 14:49:50 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:50.267171 | orchestrator | 2025-05-14 14:49:50 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:50.267224 | orchestrator | 2025-05-14 14:49:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:53.324782 | orchestrator | 2025-05-14 14:49:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:53.324960 | orchestrator | 2025-05-14 14:49:53 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:53.325687 | orchestrator | 2025-05-14 14:49:53 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:53.326399 | orchestrator | 2025-05-14 14:49:53 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:53.328475 | orchestrator | 2025-05-14 14:49:53 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:53.328497 | orchestrator | 2025-05-14 14:49:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:56.360355 | orchestrator | 2025-05-14 14:49:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:56.360469 | orchestrator | 2025-05-14 14:49:56 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:56.360830 | orchestrator | 2025-05-14 14:49:56 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:56.361271 | orchestrator | 2025-05-14 14:49:56 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:56.361787 | orchestrator | 2025-05-14 14:49:56 | INFO  | Task 
0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:56.361807 | orchestrator | 2025-05-14 14:49:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:49:59.395129 | orchestrator | 2025-05-14 14:49:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:49:59.395263 | orchestrator | 2025-05-14 14:49:59 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:49:59.395281 | orchestrator | 2025-05-14 14:49:59 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:49:59.398686 | orchestrator | 2025-05-14 14:49:59 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:49:59.398716 | orchestrator | 2025-05-14 14:49:59 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:49:59.398727 | orchestrator | 2025-05-14 14:49:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:02.435287 | orchestrator | 2025-05-14 14:50:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:02.437711 | orchestrator | 2025-05-14 14:50:02 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:02.441576 | orchestrator | 2025-05-14 14:50:02 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:02.442501 | orchestrator | 2025-05-14 14:50:02 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:50:02.442701 | orchestrator | 2025-05-14 14:50:02 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:02.442725 | orchestrator | 2025-05-14 14:50:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:05.482731 | orchestrator | 2025-05-14 14:50:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:05.483013 | orchestrator | 2025-05-14 14:50:05 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:05.484144 | orchestrator | 2025-05-14 14:50:05 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:05.485013 | orchestrator | 2025-05-14 14:50:05 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state STARTED 2025-05-14 14:50:05.486291 | orchestrator | 2025-05-14 14:50:05 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:05.486318 | orchestrator | 2025-05-14 14:50:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:08.541126 | orchestrator | 2025-05-14 14:50:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:08.543728 | orchestrator | 2025-05-14 14:50:08 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:08.545918 | orchestrator | 2025-05-14 14:50:08 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:08.548404 | orchestrator | 2025-05-14 14:50:08 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:08.549327 | orchestrator | 2025-05-14 14:50:08 | INFO  | Task 2b7c6afb-796d-493c-a6b0-b47107b49901 is in state SUCCESS 2025-05-14 14:50:08.550446 | orchestrator | 2025-05-14 14:50:08 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:08.550659 | orchestrator | 2025-05-14 14:50:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:11.616368 | orchestrator | 2025-05-14 14:50:11 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:11.619962 | orchestrator | 2025-05-14 14:50:11 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:11.622339 | orchestrator | 2025-05-14 14:50:11 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:11.624139 | orchestrator | 2025-05-14 14:50:11 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:11.625642 | orchestrator | 2025-05-14 14:50:11 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:11.625757 | orchestrator | 2025-05-14 14:50:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:14.683195 | orchestrator | 2025-05-14 14:50:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:14.683348 | orchestrator | 2025-05-14 14:50:14 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:14.683363 | orchestrator | 2025-05-14 14:50:14 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:14.683373 | orchestrator | 2025-05-14 14:50:14 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:14.683454 | orchestrator | 2025-05-14 14:50:14 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:14.683496 | orchestrator | 2025-05-14 14:50:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:17.730420 | orchestrator | 2025-05-14 14:50:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:17.730535 | orchestrator | 2025-05-14 14:50:17 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:17.732341 | orchestrator | 2025-05-14 14:50:17 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:17.733745 | orchestrator | 2025-05-14 14:50:17 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:17.734569 | orchestrator | 2025-05-14 14:50:17 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:17.734605 | orchestrator | 2025-05-14 14:50:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:20.780635 | orchestrator | 2025-05-14 14:50:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:20.780902 | orchestrator | 2025-05-14 14:50:20 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:20.781575 | orchestrator | 2025-05-14 14:50:20 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:20.783513 | orchestrator | 2025-05-14 14:50:20 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:20.784032 | orchestrator | 2025-05-14 14:50:20 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:20.784055 | orchestrator | 2025-05-14 14:50:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:23.807163 | orchestrator | 2025-05-14 14:50:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:23.807478 | orchestrator | 2025-05-14 14:50:23 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:23.808546 | orchestrator | 2025-05-14 14:50:23 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:23.808812 | orchestrator | 2025-05-14 
14:50:23 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:23.810647 | orchestrator | 2025-05-14 14:50:23 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:23.810671 | orchestrator | 2025-05-14 14:50:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:26.836865 | orchestrator | 2025-05-14 14:50:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:26.837023 | orchestrator | 2025-05-14 14:50:26 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:26.838641 | orchestrator | 2025-05-14 14:50:26 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:26.839139 | orchestrator | 2025-05-14 14:50:26 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:26.839741 | orchestrator | 2025-05-14 14:50:26 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:26.840015 | orchestrator | 2025-05-14 14:50:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:29.875488 | orchestrator | 2025-05-14 14:50:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:29.876988 | orchestrator | 2025-05-14 14:50:29 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:29.878456 | orchestrator | 2025-05-14 14:50:29 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:29.879995 | orchestrator | 2025-05-14 14:50:29 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:29.881067 | orchestrator | 2025-05-14 14:50:29 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:29.881318 | orchestrator | 2025-05-14 14:50:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:32.923403 | orchestrator | 2025-05-14 14:50:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:32.923531 | orchestrator | 2025-05-14 14:50:32 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:32.923612 | orchestrator | 2025-05-14 14:50:32 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:32.924276 | orchestrator | 2025-05-14 14:50:32 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:32.925743 | orchestrator | 2025-05-14 14:50:32 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state STARTED 2025-05-14 14:50:32.925812 | orchestrator | 2025-05-14 14:50:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:35.957162 | orchestrator | 2025-05-14 14:50:35 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:35.957479 | orchestrator | 2025-05-14 14:50:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:35.958137 | orchestrator | 2025-05-14 14:50:35 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:35.958640 | orchestrator | 2025-05-14 14:50:35 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:35.959190 | orchestrator | 2025-05-14 14:50:35 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:35.959748 | orchestrator | 2025-05-14 14:50:35 | INFO  | Task 0ebf07d9-c6da-4f82-8b5d-33bb98505389 is in state SUCCESS 2025-05-14 14:50:35.961203 | 
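The interleaved "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the OSISM manager polling what look like Celery task IDs for the individual deploy jobs until each reaches SUCCESS. A minimal sketch of that polling pattern; get_task_state is a hypothetical callable standing in for the real lookup, which is not shown in this log:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll every task until it reports SUCCESS, logging the state on each pass,
    # mirroring the 'is in state ...' / 'Wait 1 second(s) ...' lines above.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

In the log the set of watched IDs also grows while waiting (for example 58cdff0a-32ea-4ac8-85c3-b8ed012477cc only appears once its job has been queued), which the sketch deliberately leaves out.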
orchestrator | 2025-05-14 14:50:35.961232 | orchestrator | None 2025-05-14 14:50:35.961269 | orchestrator | 2025-05-14 14:50:35.961281 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:50:35.961291 | orchestrator | 2025-05-14 14:50:35.961301 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:50:35.961311 | orchestrator | Wednesday 14 May 2025 14:49:36 +0000 (0:00:00.355) 0:00:00.355 ********* 2025-05-14 14:50:35.961321 | orchestrator | ok: [testbed-manager] 2025-05-14 14:50:35.961331 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:50:35.961341 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:50:35.961350 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:50:35.961360 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:50:35.961369 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:50:35.961378 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:50:35.961387 | orchestrator | 2025-05-14 14:50:35.961397 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:50:35.961407 | orchestrator | Wednesday 14 May 2025 14:49:37 +0000 (0:00:01.059) 0:00:01.415 ********* 2025-05-14 14:50:35.961417 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961427 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961437 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961446 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961455 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961465 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961603 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-14 14:50:35.961617 | orchestrator | 2025-05-14 14:50:35.961627 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-14 14:50:35.961636 | orchestrator | 2025-05-14 14:50:35.961667 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-14 14:50:35.961677 | orchestrator | Wednesday 14 May 2025 14:49:38 +0000 (0:00:00.713) 0:00:02.129 ********* 2025-05-14 14:50:35.961688 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:50:35.961698 | orchestrator | 2025-05-14 14:50:35.961708 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-14 14:50:35.961718 | orchestrator | Wednesday 14 May 2025 14:49:39 +0000 (0:00:01.246) 0:00:03.376 ********* 2025-05-14 14:50:35.961727 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-05-14 14:50:35.961736 | orchestrator | 2025-05-14 14:50:35.961758 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-14 14:50:35.961768 | orchestrator | Wednesday 14 May 2025 14:49:42 +0000 (0:00:03.091) 0:00:06.467 ********* 2025-05-14 14:50:35.961778 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-14 14:50:35.961789 | orchestrator | changed: [testbed-manager] => (item=swift -> 
https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-14 14:50:35.961799 | orchestrator | 2025-05-14 14:50:35.961810 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-14 14:50:35.961820 | orchestrator | Wednesday 14 May 2025 14:49:48 +0000 (0:00:06.234) 0:00:12.702 ********* 2025-05-14 14:50:35.961831 | orchestrator | ok: [testbed-manager] => (item=service) 2025-05-14 14:50:35.961841 | orchestrator | 2025-05-14 14:50:35.961852 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-14 14:50:35.961863 | orchestrator | Wednesday 14 May 2025 14:49:52 +0000 (0:00:03.825) 0:00:16.528 ********* 2025-05-14 14:50:35.961873 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:50:35.961884 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-05-14 14:50:35.961895 | orchestrator | 2025-05-14 14:50:35.961905 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-14 14:50:35.961916 | orchestrator | Wednesday 14 May 2025 14:49:56 +0000 (0:00:04.155) 0:00:20.683 ********* 2025-05-14 14:50:35.961927 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-05-14 14:50:35.961938 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-05-14 14:50:35.961948 | orchestrator | 2025-05-14 14:50:35.961958 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-14 14:50:35.961969 | orchestrator | Wednesday 14 May 2025 14:50:02 +0000 (0:00:05.548) 0:00:26.231 ********* 2025-05-14 14:50:35.961979 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-05-14 14:50:35.961990 | orchestrator | 2025-05-14 14:50:35.962000 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:50:35.962011 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962064 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962076 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962087 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962098 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962121 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962133 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:50:35.962150 | orchestrator | 2025-05-14 14:50:35.962161 | orchestrator | 2025-05-14 14:50:35.962171 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:50:35.962181 | orchestrator | Wednesday 14 May 2025 14:50:06 +0000 (0:00:04.585) 0:00:30.817 ********* 2025-05-14 14:50:35.962190 | orchestrator | =============================================================================== 2025-05-14 14:50:35.962200 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.23s 2025-05-14 14:50:35.962209 | orchestrator | service-ks-register : ceph-rgw | Creating roles 
------------------------- 5.55s 2025-05-14 14:50:35.962219 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.59s 2025-05-14 14:50:35.962228 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.16s 2025-05-14 14:50:35.962237 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.83s 2025-05-14 14:50:35.962304 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.09s 2025-05-14 14:50:35.962324 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.25s 2025-05-14 14:50:35.962342 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.06s 2025-05-14 14:50:35.962361 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-05-14 14:50:35.962382 | orchestrator | 2025-05-14 14:50:35.962393 | orchestrator | 2025-05-14 14:50:35.962403 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:50:35.962413 | orchestrator | 2025-05-14 14:50:35.962423 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:50:35.962432 | orchestrator | Wednesday 14 May 2025 14:48:30 +0000 (0:00:00.336) 0:00:00.336 ********* 2025-05-14 14:50:35.962442 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:50:35.962451 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:50:35.962461 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:50:35.962471 | orchestrator | 2025-05-14 14:50:35.962480 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:50:35.962490 | orchestrator | Wednesday 14 May 2025 14:48:30 +0000 (0:00:00.299) 0:00:00.635 ********* 2025-05-14 14:50:35.962499 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-14 14:50:35.962509 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-14 14:50:35.962525 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-14 14:50:35.962535 | orchestrator | 2025-05-14 14:50:35.962545 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-14 14:50:35.962554 | orchestrator | 2025-05-14 14:50:35.962564 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 14:50:35.962573 | orchestrator | Wednesday 14 May 2025 14:48:30 +0000 (0:00:00.207) 0:00:00.843 ********* 2025-05-14 14:50:35.962583 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:50:35.962593 | orchestrator | 2025-05-14 14:50:35.962603 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-14 14:50:35.962613 | orchestrator | Wednesday 14 May 2025 14:48:31 +0000 (0:00:00.489) 0:00:01.332 ********* 2025-05-14 14:50:35.962622 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-14 14:50:35.962632 | orchestrator | 2025-05-14 14:50:35.962641 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-14 14:50:35.962650 | orchestrator | Wednesday 14 May 2025 14:48:34 +0000 (0:00:03.707) 0:00:05.040 ********* 2025-05-14 14:50:35.962660 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
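Both the ceph-rgw play above and the magnum play that follows start with the shared service-ks-register role: create the Keystone service, its internal and public endpoints, the service project and user, any extra roles, and the role grant. A rough openstacksdk equivalent of the ceph-rgw registration, for orientation only; the endpoint URLs are the ones registered in the log, the cloud name and password are placeholders, and kolla-ansible itself performs these steps with Ansible modules rather than this SDK code:

import openstack

conn = openstack.connect(cloud='testbed')  # hypothetical clouds.yaml entry

# swift (object-store) service plus internal/public endpoints, URLs as in the log
svc = conn.identity.create_service(name='swift', type='object-store')
for interface, url in [
    ('internal', 'https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'),
    ('public', 'https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'),
]:
    conn.identity.create_endpoint(service_id=svc.id, interface=interface, url=url)

# service user, roles and grant (ResellerAdmin is created, admin is granted)
project = conn.identity.find_project('service')
user = conn.identity.create_user(name='ceph_rgw', password='...', default_project_id=project.id)
conn.identity.create_role(name='ResellerAdmin')
admin = conn.identity.find_role('admin')
conn.identity.assign_project_role_to_user(project, user, admin)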
https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-14 14:50:35.962670 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-14 14:50:35.962680 | orchestrator | 2025-05-14 14:50:35.962689 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-14 14:50:35.962707 | orchestrator | Wednesday 14 May 2025 14:48:41 +0000 (0:00:07.083) 0:00:12.123 ********* 2025-05-14 14:50:35.962717 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:50:35.962727 | orchestrator | 2025-05-14 14:50:35.962736 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-14 14:50:35.962746 | orchestrator | Wednesday 14 May 2025 14:48:45 +0000 (0:00:03.534) 0:00:15.658 ********* 2025-05-14 14:50:35.962755 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:50:35.962765 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-14 14:50:35.962774 | orchestrator | 2025-05-14 14:50:35.962784 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-14 14:50:35.962794 | orchestrator | Wednesday 14 May 2025 14:48:49 +0000 (0:00:04.171) 0:00:19.830 ********* 2025-05-14 14:50:35.962803 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:50:35.962812 | orchestrator | 2025-05-14 14:50:35.962822 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-14 14:50:35.962831 | orchestrator | Wednesday 14 May 2025 14:48:53 +0000 (0:00:03.719) 0:00:23.549 ********* 2025-05-14 14:50:35.962841 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-14 14:50:35.962851 | orchestrator | 2025-05-14 14:50:35.962860 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-14 14:50:35.962869 | orchestrator | Wednesday 14 May 2025 14:48:58 +0000 (0:00:04.841) 0:00:28.391 ********* 2025-05-14 14:50:35.962879 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.962888 | orchestrator | 2025-05-14 14:50:35.962898 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-14 14:50:35.962915 | orchestrator | Wednesday 14 May 2025 14:49:01 +0000 (0:00:03.661) 0:00:32.053 ********* 2025-05-14 14:50:35.962925 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.962935 | orchestrator | 2025-05-14 14:50:35.962944 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-14 14:50:35.962954 | orchestrator | Wednesday 14 May 2025 14:49:06 +0000 (0:00:04.506) 0:00:36.559 ********* 2025-05-14 14:50:35.962964 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.962973 | orchestrator | 2025-05-14 14:50:35.962983 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-14 14:50:35.962992 | orchestrator | Wednesday 14 May 2025 14:49:10 +0000 (0:00:03.717) 0:00:40.277 ********* 2025-05-14 14:50:35.963005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.963025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.963042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.963053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.963073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.963084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.963094 | orchestrator | 2025-05-14 14:50:35.963104 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-14 14:50:35.963114 | orchestrator | Wednesday 14 May 2025 14:49:12 +0000 (0:00:02.208) 0:00:42.485 ********* 2025-05-14 14:50:35.963123 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.963133 | orchestrator | 2025-05-14 14:50:35.963142 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-14 14:50:35.963158 | orchestrator | Wednesday 14 May 2025 14:49:12 +0000 (0:00:00.099) 0:00:42.585 ********* 2025-05-14 14:50:35.963168 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.963177 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.963187 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.963196 | orchestrator | 2025-05-14 14:50:35.963210 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-14 14:50:35.963220 | orchestrator | Wednesday 14 May 2025 14:49:12 +0000 (0:00:00.327) 0:00:42.912 ********* 2025-05-14 14:50:35.963229 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:50:35.963259 | orchestrator | 2025-05-14 14:50:35.963272 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-14 14:50:35.963281 | orchestrator | Wednesday 14 May 2025 14:49:13 +0000 (0:00:00.458) 0:00:43.370 ********* 2025-05-14 14:50:35.963292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963314 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.963331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963360 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.963374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963395 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.963404 | orchestrator | 2025-05-14 14:50:35.963414 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-14 14:50:35.963424 | orchestrator | Wednesday 14 May 2025 14:49:13 +0000 (0:00:00.774) 0:00:44.145 ********* 2025-05-14 14:50:35.963433 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.963442 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.963452 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.963461 | orchestrator | 2025-05-14 14:50:35.963471 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 14:50:35.963480 | orchestrator | Wednesday 14 May 2025 14:49:14 +0000 (0:00:00.226) 0:00:44.371 ********* 2025-05-14 14:50:35.963490 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:50:35.963500 | orchestrator | 2025-05-14 14:50:35.963509 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-14 14:50:35.963519 | orchestrator | Wednesday 14 May 2025 14:49:14 +0000 (0:00:00.613) 0:00:44.985 ********* 2025-05-14 14:50:35.963536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.963553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.963568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.963579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.963589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.963605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.963628 | orchestrator | 2025-05-14 14:50:35.963638 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-14 14:50:35.963647 | orchestrator | Wednesday 14 May 2025 14:49:17 +0000 (0:00:03.179) 0:00:48.164 ********* 2025-05-14 14:50:35.963658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963682 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.963693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963737 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.963753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963796 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.963806 | orchestrator | 2025-05-14 14:50:35.963815 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-14 14:50:35.963825 | orchestrator | Wednesday 14 May 2025 14:49:20 +0000 (0:00:02.949) 0:00:51.114 ********* 2025-05-14 14:50:35.963835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963856 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.963873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963911 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.963925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.963936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.963946 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.963956 | orchestrator | 2025-05-14 14:50:35.963966 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-14 14:50:35.963975 | orchestrator | Wednesday 14 May 2025 14:49:22 +0000 (0:00:02.025) 0:00:53.140 ********* 2025-05-14 14:50:35.963991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964077 | orchestrator | 2025-05-14 14:50:35.964087 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-14 14:50:35.964097 | orchestrator | Wednesday 14 May 2025 14:49:26 +0000 (0:00:03.278) 0:00:56.419 ********* 2025-05-14 14:50:35.964107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964165 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964186 | orchestrator | 2025-05-14 14:50:35.964196 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-14 14:50:35.964205 | orchestrator | Wednesday 14 May 2025 14:49:35 +0000 (0:00:09.148) 0:01:05.568 ********* 2025-05-14 14:50:35.964219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.964230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.964258 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.964278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.964316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:50:35.964331 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.964349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 14:50:35.964372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-05-14 14:50:35.964383 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.964392 | orchestrator | 2025-05-14 14:50:35.964402 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-14 14:50:35.964411 | orchestrator | Wednesday 14 May 2025 14:49:36 +0000 (0:00:00.826) 0:01:06.394 ********* 2025-05-14 14:50:35.964421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 14:50:35.964473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:50:35.964509 | orchestrator | 2025-05-14 14:50:35.964519 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 14:50:35.964529 | orchestrator | Wednesday 14 May 2025 14:49:38 +0000 (0:00:02.544) 0:01:08.939 ********* 2025-05-14 14:50:35.964539 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:50:35.964548 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:50:35.964558 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:50:35.964567 | orchestrator | 2025-05-14 14:50:35.964577 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-14 14:50:35.964587 | orchestrator | Wednesday 14 May 2025 14:49:38 +0000 (0:00:00.268) 0:01:09.207 ********* 2025-05-14 14:50:35.964596 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.964605 | orchestrator | 2025-05-14 14:50:35.964615 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-14 14:50:35.964625 | orchestrator | Wednesday 14 May 2025 14:49:41 +0000 (0:00:02.486) 0:01:11.694 ********* 2025-05-14 14:50:35.964634 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.964643 | orchestrator | 2025-05-14 14:50:35.964653 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-14 14:50:35.964668 | orchestrator | Wednesday 14 May 2025 14:49:43 +0000 (0:00:02.505) 0:01:14.199 ********* 2025-05-14 14:50:35.964678 | orchestrator | changed: 
[testbed-node-0] 2025-05-14 14:50:35.964687 | orchestrator | 2025-05-14 14:50:35.964697 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 14:50:35.964706 | orchestrator | Wednesday 14 May 2025 14:49:59 +0000 (0:00:15.236) 0:01:29.436 ********* 2025-05-14 14:50:35.964716 | orchestrator | 2025-05-14 14:50:35.964726 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 14:50:35.964735 | orchestrator | Wednesday 14 May 2025 14:49:59 +0000 (0:00:00.160) 0:01:29.597 ********* 2025-05-14 14:50:35.964747 | orchestrator | 2025-05-14 14:50:35.964765 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 14:50:35.964781 | orchestrator | Wednesday 14 May 2025 14:49:59 +0000 (0:00:00.390) 0:01:29.987 ********* 2025-05-14 14:50:35.964791 | orchestrator | 2025-05-14 14:50:35.964801 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-14 14:50:35.964810 | orchestrator | Wednesday 14 May 2025 14:49:59 +0000 (0:00:00.067) 0:01:30.055 ********* 2025-05-14 14:50:35.964820 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.964829 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:50:35.964839 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:50:35.964848 | orchestrator | 2025-05-14 14:50:35.964857 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-14 14:50:35.964867 | orchestrator | Wednesday 14 May 2025 14:50:18 +0000 (0:00:18.185) 0:01:48.241 ********* 2025-05-14 14:50:35.964877 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:50:35.964886 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:50:35.964895 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:50:35.964905 | orchestrator | 2025-05-14 14:50:35.964914 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:50:35.964924 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 14:50:35.964934 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:50:35.964951 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:50:35.964961 | orchestrator | 2025-05-14 14:50:35.964970 | orchestrator | 2025-05-14 14:50:35.964980 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:50:35.964994 | orchestrator | Wednesday 14 May 2025 14:50:34 +0000 (0:00:16.052) 0:02:04.293 ********* 2025-05-14 14:50:35.965004 | orchestrator | =============================================================================== 2025-05-14 14:50:35.965014 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.19s 2025-05-14 14:50:35.965023 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.05s 2025-05-14 14:50:35.965032 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.24s 2025-05-14 14:50:35.965045 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.15s 2025-05-14 14:50:35.965064 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.08s 2025-05-14 14:50:35.965078 | orchestrator | 
service-ks-register : magnum | Granting user roles ---------------------- 4.84s 2025-05-14 14:50:35.965088 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.51s 2025-05-14 14:50:35.965098 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.17s 2025-05-14 14:50:35.965107 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.72s 2025-05-14 14:50:35.965117 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.72s 2025-05-14 14:50:35.965126 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.71s 2025-05-14 14:50:35.965135 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.66s 2025-05-14 14:50:35.965145 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.53s 2025-05-14 14:50:35.965154 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.28s 2025-05-14 14:50:35.965163 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.18s 2025-05-14 14:50:35.965173 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.95s 2025-05-14 14:50:35.965182 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.54s 2025-05-14 14:50:35.965192 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.51s 2025-05-14 14:50:35.965206 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.49s 2025-05-14 14:50:35.965223 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.21s 2025-05-14 14:50:38.984319 | orchestrator | 2025-05-14 14:50:38 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:38.984425 | orchestrator | 2025-05-14 14:50:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:38.984593 | orchestrator | 2025-05-14 14:50:38 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:38.986133 | orchestrator | 2025-05-14 14:50:38 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:38.986710 | orchestrator | 2025-05-14 14:50:38 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:38.986735 | orchestrator | 2025-05-14 14:50:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:42.011226 | orchestrator | 2025-05-14 14:50:42 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:42.011479 | orchestrator | 2025-05-14 14:50:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:42.012042 | orchestrator | 2025-05-14 14:50:42 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:42.012670 | orchestrator | 2025-05-14 14:50:42 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:42.013313 | orchestrator | 2025-05-14 14:50:42 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:42.013409 | orchestrator | 2025-05-14 14:50:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:45.047919 | orchestrator | 2025-05-14 14:50:45 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 
14:50:45.048422 | orchestrator | 2025-05-14 14:50:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:45.049012 | orchestrator | 2025-05-14 14:50:45 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:45.049755 | orchestrator | 2025-05-14 14:50:45 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:45.050738 | orchestrator | 2025-05-14 14:50:45 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:45.050766 | orchestrator | 2025-05-14 14:50:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:48.081768 | orchestrator | 2025-05-14 14:50:48 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:48.081999 | orchestrator | 2025-05-14 14:50:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:48.082482 | orchestrator | 2025-05-14 14:50:48 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:48.082962 | orchestrator | 2025-05-14 14:50:48 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:48.083561 | orchestrator | 2025-05-14 14:50:48 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:48.083966 | orchestrator | 2025-05-14 14:50:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:51.146962 | orchestrator | 2025-05-14 14:50:51 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:51.149184 | orchestrator | 2025-05-14 14:50:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:51.151849 | orchestrator | 2025-05-14 14:50:51 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:51.154065 | orchestrator | 2025-05-14 14:50:51 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:51.155645 | orchestrator | 2025-05-14 14:50:51 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:51.155668 | orchestrator | 2025-05-14 14:50:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:54.184625 | orchestrator | 2025-05-14 14:50:54 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:54.186807 | orchestrator | 2025-05-14 14:50:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:54.190609 | orchestrator | 2025-05-14 14:50:54 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:54.191757 | orchestrator | 2025-05-14 14:50:54 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:50:54.193309 | orchestrator | 2025-05-14 14:50:54 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:54.193339 | orchestrator | 2025-05-14 14:50:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:50:57.253885 | orchestrator | 2025-05-14 14:50:57 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:50:57.255452 | orchestrator | 2025-05-14 14:50:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:50:57.255487 | orchestrator | 2025-05-14 14:50:57 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:50:57.255746 | orchestrator | 2025-05-14 14:50:57 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in 
state STARTED 2025-05-14 14:50:57.257116 | orchestrator | 2025-05-14 14:50:57 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:50:57.257142 | orchestrator | 2025-05-14 14:50:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:00.300597 | orchestrator | 2025-05-14 14:51:00 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:00.300675 | orchestrator | 2025-05-14 14:51:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:00.302050 | orchestrator | 2025-05-14 14:51:00 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:00.302070 | orchestrator | 2025-05-14 14:51:00 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:51:00.302615 | orchestrator | 2025-05-14 14:51:00 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:00.302715 | orchestrator | 2025-05-14 14:51:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:03.340757 | orchestrator | 2025-05-14 14:51:03 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:03.340847 | orchestrator | 2025-05-14 14:51:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:03.342728 | orchestrator | 2025-05-14 14:51:03 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:03.343150 | orchestrator | 2025-05-14 14:51:03 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:51:03.344744 | orchestrator | 2025-05-14 14:51:03 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:03.344993 | orchestrator | 2025-05-14 14:51:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:06.382960 | orchestrator | 2025-05-14 14:51:06 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:06.383461 | orchestrator | 2025-05-14 14:51:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:06.385364 | orchestrator | 2025-05-14 14:51:06 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:06.388665 | orchestrator | 2025-05-14 14:51:06 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state STARTED 2025-05-14 14:51:06.388696 | orchestrator | 2025-05-14 14:51:06 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:06.388709 | orchestrator | 2025-05-14 14:51:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:09.434202 | orchestrator | 2025-05-14 14:51:09 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:09.435500 | orchestrator | 2025-05-14 14:51:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:09.436924 | orchestrator | 2025-05-14 14:51:09 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:09.446346 | orchestrator | 2025-05-14 14:51:09.446432 | orchestrator | 2025-05-14 14:51:09 | INFO  | Task 7f2a328f-3a0d-4da9-88ee-938d4c6e99c4 is in state SUCCESS 2025-05-14 14:51:09.448090 | orchestrator | 2025-05-14 14:51:09.448162 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:51:09.448203 | orchestrator | 2025-05-14 14:51:09.448215 | orchestrator | TASK [Group hosts based on Kolla action] 
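
The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" entries above are the deploy wrapper polling the five tasks it launched until each reaches a terminal state; once a task reports SUCCESS, its captured Ansible output (the play that follows) is printed. A minimal sketch of that polling pattern, assuming a hypothetical `get_task_state()` lookup (the real client call is not visible in this log):

```python
import time

# Terminal Celery-style task states after which polling can stop.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every task reaches a terminal state.

    `get_task_state` is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED" or "SUCCESS"); the lookup the
    deploy wrapper actually uses is not visible in this log.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```
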
*************************************** 2025-05-14 14:51:09.448227 | orchestrator | Wednesday 14 May 2025 14:46:23 +0000 (0:00:00.609) 0:00:00.609 ********* 2025-05-14 14:51:09.448238 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:51:09.448250 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:51:09.448260 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:51:09.448319 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:51:09.448331 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:51:09.448342 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:51:09.448352 | orchestrator | 2025-05-14 14:51:09.448363 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:51:09.448374 | orchestrator | Wednesday 14 May 2025 14:46:24 +0000 (0:00:00.881) 0:00:01.490 ********* 2025-05-14 14:51:09.448385 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-14 14:51:09.448396 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-14 14:51:09.448407 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-14 14:51:09.448417 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-14 14:51:09.448428 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-14 14:51:09.448439 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-14 14:51:09.448449 | orchestrator | 2025-05-14 14:51:09.448460 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-14 14:51:09.448471 | orchestrator | 2025-05-14 14:51:09.448546 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 14:51:09.448557 | orchestrator | Wednesday 14 May 2025 14:46:24 +0000 (0:00:00.756) 0:00:02.247 ********* 2025-05-14 14:51:09.448569 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:51:09.448581 | orchestrator | 2025-05-14 14:51:09.448592 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-14 14:51:09.448603 | orchestrator | Wednesday 14 May 2025 14:46:25 +0000 (0:00:00.959) 0:00:03.206 ********* 2025-05-14 14:51:09.448614 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:51:09.448625 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:51:09.448635 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:51:09.448647 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:51:09.448659 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:51:09.448671 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:51:09.448682 | orchestrator | 2025-05-14 14:51:09.448694 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-14 14:51:09.448706 | orchestrator | Wednesday 14 May 2025 14:46:26 +0000 (0:00:01.122) 0:00:04.328 ********* 2025-05-14 14:51:09.448719 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:51:09.448731 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:51:09.448744 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:51:09.448756 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:51:09.448768 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:51:09.448780 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:51:09.448792 | orchestrator | 2025-05-14 14:51:09.448805 | orchestrator | TASK [neutron : Check for ML2/OVN presence] 
************************************ 2025-05-14 14:51:09.448817 | orchestrator | Wednesday 14 May 2025 14:46:27 +0000 (0:00:01.032) 0:00:05.361 ********* 2025-05-14 14:51:09.448829 | orchestrator | ok: [testbed-node-0] => { 2025-05-14 14:51:09.448840 | orchestrator |  "changed": false, 2025-05-14 14:51:09.448851 | orchestrator |  "msg": "All assertions passed" 2025-05-14 14:51:09.448862 | orchestrator | } 2025-05-14 14:51:09.448873 | orchestrator | ok: [testbed-node-1] => { 2025-05-14 14:51:09.448883 | orchestrator |  "changed": false, 2025-05-14 14:51:09.448894 | orchestrator |  "msg": "All assertions passed" 2025-05-14 14:51:09.448905 | orchestrator | } 2025-05-14 14:51:09.448915 | orchestrator | ok: [testbed-node-2] => { 2025-05-14 14:51:09.448926 | orchestrator |  "changed": false, 2025-05-14 14:51:09.448946 | orchestrator |  "msg": "All assertions passed" 2025-05-14 14:51:09.448957 | orchestrator | } 2025-05-14 14:51:09.448967 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 14:51:09.448978 | orchestrator |  "changed": false, 2025-05-14 14:51:09.448989 | orchestrator |  "msg": "All assertions passed" 2025-05-14 14:51:09.449000 | orchestrator | } 2025-05-14 14:51:09.449010 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 14:51:09.449021 | orchestrator |  "changed": false, 2025-05-14 14:51:09.449031 | orchestrator |  "msg": "All assertions passed" 2025-05-14 14:51:09.449042 | orchestrator | } 2025-05-14 14:51:09.449053 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 14:51:09.449063 | orchestrator |  "changed": false, 2025-05-14 14:51:09.449074 | orchestrator |  "msg": "All assertions passed" 2025-05-14 14:51:09.449084 | orchestrator | } 2025-05-14 14:51:09.449095 | orchestrator | 2025-05-14 14:51:09.449106 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-14 14:51:09.449130 | orchestrator | Wednesday 14 May 2025 14:46:28 +0000 (0:00:00.614) 0:00:05.976 ********* 2025-05-14 14:51:09.449142 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.449152 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.449163 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.449174 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.449185 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.449195 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.449206 | orchestrator | 2025-05-14 14:51:09.449217 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-14 14:51:09.449228 | orchestrator | Wednesday 14 May 2025 14:46:29 +0000 (0:00:00.653) 0:00:06.630 ********* 2025-05-14 14:51:09.449239 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-14 14:51:09.449250 | orchestrator | 2025-05-14 14:51:09.449260 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-14 14:51:09.449300 | orchestrator | Wednesday 14 May 2025 14:46:32 +0000 (0:00:03.484) 0:00:10.114 ********* 2025-05-14 14:51:09.449320 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-14 14:51:09.449341 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-14 14:51:09.449358 | orchestrator | 2025-05-14 14:51:09.449389 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-14 14:51:09.449400 | orchestrator 
| Wednesday 14 May 2025 14:46:39 +0000 (0:00:06.532) 0:00:16.646 ********* 2025-05-14 14:51:09.449411 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:51:09.449422 | orchestrator | 2025-05-14 14:51:09.449432 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-14 14:51:09.449443 | orchestrator | Wednesday 14 May 2025 14:46:42 +0000 (0:00:03.558) 0:00:20.205 ********* 2025-05-14 14:51:09.449454 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:51:09.449465 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-14 14:51:09.449475 | orchestrator | 2025-05-14 14:51:09.449486 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-14 14:51:09.449497 | orchestrator | Wednesday 14 May 2025 14:46:46 +0000 (0:00:03.960) 0:00:24.166 ********* 2025-05-14 14:51:09.449507 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:51:09.449518 | orchestrator | 2025-05-14 14:51:09.449529 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-14 14:51:09.449539 | orchestrator | Wednesday 14 May 2025 14:46:49 +0000 (0:00:03.245) 0:00:27.411 ********* 2025-05-14 14:51:09.449550 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-14 14:51:09.449561 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-14 14:51:09.449572 | orchestrator | 2025-05-14 14:51:09.449582 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 14:51:09.449601 | orchestrator | Wednesday 14 May 2025 14:46:58 +0000 (0:00:08.458) 0:00:35.870 ********* 2025-05-14 14:51:09.449612 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.449623 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.449634 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.449644 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.449655 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.449666 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.449676 | orchestrator | 2025-05-14 14:51:09.449687 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-14 14:51:09.449698 | orchestrator | Wednesday 14 May 2025 14:46:59 +0000 (0:00:00.702) 0:00:36.572 ********* 2025-05-14 14:51:09.449709 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.449720 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.449730 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.449741 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.449752 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.449763 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.449773 | orchestrator | 2025-05-14 14:51:09.449784 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-14 14:51:09.449795 | orchestrator | Wednesday 14 May 2025 14:47:02 +0000 (0:00:03.107) 0:00:39.680 ********* 2025-05-14 14:51:09.449806 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:51:09.449816 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:51:09.449828 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:51:09.449838 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:51:09.449849 | orchestrator | ok: [testbed-node-4] 
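
The service-ks-register tasks above register neutron in Keystone: a `network` service entry, internal and public endpoints on port 9696, the `service` project, a `neutron` user, and the `admin` and `service` role grants. The playbook does this through Ansible modules; purely as an illustration, roughly the same sequence could be issued with openstacksdk, assuming a `clouds.yaml` profile (here called `testbed`) with admin credentials and reusing the URLs and names shown in the log:

```python
import openstack

# Assumes a clouds.yaml profile named "testbed" with admin credentials
# (hypothetical; not part of this log).
conn = openstack.connect(cloud="testbed")

# Service catalog entry, matching "neutron (network)" above.
service = conn.identity.create_service(name="neutron", type="network")

# Internal and public endpoints, as created in the log.
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9696"),
    ("public", "https://api.testbed.osism.xyz:9696"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# Service project, neutron service user and role grants
# ("neutron -> service -> admin" and "neutron -> service -> service").
project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
user = conn.identity.create_user(name="neutron", password="CHANGE_ME",
                                 default_project_id=project.id)
for role_name in ("admin", "service"):
    role = conn.identity.find_role(role_name) or conn.identity.create_role(name=role_name)
    conn.identity.assign_project_role_to_user(project, user, role)
```
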
2025-05-14 14:51:09.449860 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:51:09.449871 | orchestrator | 2025-05-14 14:51:09.449882 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-14 14:51:09.449893 | orchestrator | Wednesday 14 May 2025 14:47:03 +0000 (0:00:01.144) 0:00:40.824 ********* 2025-05-14 14:51:09.449903 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.449914 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.449925 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.449935 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.449946 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.449956 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.449967 | orchestrator | 2025-05-14 14:51:09.449978 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-14 14:51:09.449989 | orchestrator | Wednesday 14 May 2025 14:47:06 +0000 (0:00:02.794) 0:00:43.619 ********* 2025-05-14 14:51:09.450009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.450087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.450152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.450232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.450322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.450335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.450416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.450561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.450596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.450625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.450661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.450673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.450703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.450981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.450998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.451038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.451079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.451102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.451137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451179 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.451190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.451202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.451505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.451534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.451555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.451761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 
14:51:09.451773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.451792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.451804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.451888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.451911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.451928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.451944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.452815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.452847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.452858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.452908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.452920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.452939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.452949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.452969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.452981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.452998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.453009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.453023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
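The changed/skipping pattern in this loop output is consistent across nodes: an item is acted on only when its 'enabled' flag is truthy and 'host_in_groups' is True for that host (for example, neutron-ovn-metadata-agent reports "changed" on testbed-node-4, while neutron-tls-proxy, whose enabled flag is 'no', is skipped everywhere). Below is a minimal Python sketch of that observed condition, using abridged copies of two item dicts from the log; is_acted_on is an illustrative helper for this note and not part of kolla-ansible itself.

    # Sketch only: mirrors the changed/skipping pattern visible in the loop
    # output above; the actual condition lives in the kolla-ansible neutron role.
    def is_acted_on(service: dict) -> bool:
        """True when the loop item would report 'changed' for this host."""
        enabled = service.get("enabled")
        # 'enabled' appears both as booleans and as the string 'no' in the dump.
        if isinstance(enabled, str):
            enabled = enabled.lower() not in ("no", "false", "0")
        return bool(enabled) and bool(service.get("host_in_groups"))

    # Abridged items copied from the log output (only the relevant keys kept).
    ovn_metadata_agent = {"enabled": True, "host_in_groups": True}   # -> changed
    tls_proxy = {"enabled": "no", "host_in_groups": True}            # -> skipping

    assert is_acted_on(ovn_metadata_agent) is True
    assert is_acted_on(tls_proxy) is False
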
2025-05-14 14:51:09.453039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.453050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.453061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.453077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.453089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 
14:51:09.453099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.453110 | orchestrator | 2025-05-14 14:51:09.453120 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-14 14:51:09.453130 | orchestrator | Wednesday 14 May 2025 14:47:09 +0000 (0:00:03.043) 0:00:46.663 ********* 2025-05-14 14:51:09.453140 | orchestrator | [WARNING]: Skipped 2025-05-14 14:51:09.453150 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-14 14:51:09.453160 | orchestrator | due to this access issue: 2025-05-14 14:51:09.453175 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-14 14:51:09.453310 | orchestrator | a directory 2025-05-14 14:51:09.453322 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:51:09.453332 | orchestrator | 2025-05-14 14:51:09.453342 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 14:51:09.453351 | orchestrator | Wednesday 14 May 2025 14:47:09 +0000 (0:00:00.565) 0:00:47.229 ********* 2025-05-14 14:51:09.453399 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:51:09.453412 | orchestrator | 2025-05-14 14:51:09.453462 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-14 14:51:09.453473 | orchestrator | Wednesday 14 May 2025 14:47:11 +0000 (0:00:01.519) 0:00:48.748 ********* 2025-05-14 14:51:09.453485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.453505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.453553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.453616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.453637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.453658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.453670 | orchestrator | 2025-05-14 14:51:09.453680 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-14 14:51:09.453690 | orchestrator | Wednesday 14 May 2025 14:47:16 +0000 (0:00:05.677) 0:00:54.426 ********* 2025-05-14 14:51:09.453700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.453711 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.453725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.453736 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.453751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.453772 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.453782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.453792 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.453802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.453840 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.453851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.453861 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.453902 | orchestrator | 2025-05-14 14:51:09.453912 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-14 14:51:09.453922 | orchestrator | Wednesday 14 May 2025 14:47:21 +0000 (0:00:04.478) 0:00:58.904 ********* 2025-05-14 14:51:09.453976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.453989 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.454005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.454130 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.454146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.454157 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.454167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.454178 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.454213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.454224 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.454234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.454251 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.454260 | orchestrator | 2025-05-14 14:51:09.454299 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-14 14:51:09.454309 | orchestrator | Wednesday 14 May 2025 14:47:26 +0000 (0:00:05.032) 0:01:03.936 ********* 2025-05-14 14:51:09.454319 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.454328 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.454338 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.454347 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.454356 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.454365 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.454375 | orchestrator | 2025-05-14 14:51:09.454384 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-14 14:51:09.454394 | orchestrator | Wednesday 14 May 2025 14:47:30 +0000 (0:00:04.382) 0:01:08.319 ********* 2025-05-14 14:51:09.454404 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.454413 | orchestrator | 2025-05-14 14:51:09.454422 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-14 14:51:09.454432 | orchestrator | Wednesday 14 May 2025 14:47:31 +0000 (0:00:00.218) 0:01:08.538 ********* 2025-05-14 14:51:09.454441 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.454451 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.454460 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.454469 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.454533 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.454545 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.454554 | orchestrator | 2025-05-14 14:51:09.454564 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-14 14:51:09.454574 | 
orchestrator | Wednesday 14 May 2025 14:47:32 +0000 (0:00:01.203) 0:01:09.742 ********* 2025-05-14 14:51:09.454585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.454597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.454658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.454680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.454695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.454727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.454738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.454796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.454807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.454840 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.454860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.454888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.454899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.454910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455036 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.455052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.455069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.455114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.455162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.455178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455188 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.455198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.455208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.455323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.455390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.455419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.455429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.455499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.455663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.455683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.455702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455740 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.455750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455791 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.455802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.455853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.455887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.455901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.455916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.455973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.455985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.455995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.456010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456096 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.456114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.456125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.456139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456153 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.456168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.456177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.456196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456296 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.456315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.456325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.456333 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.456341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456349 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.456357 | orchestrator | 2025-05-14 14:51:09.456365 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-14 14:51:09.456373 | orchestrator | Wednesday 14 May 2025 14:47:36 +0000 (0:00:04.171) 0:01:13.913 ********* 2025-05-14 14:51:09.456407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.456422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.456436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.456492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 
14:51:09.456501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.456538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.456990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.456999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.457066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.457097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.457131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.457317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.457371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.457411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.457466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.457513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.457586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.457624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.457658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.457709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.457780 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.457799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.457835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.457859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.457878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.457948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.457958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.457980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.457988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.458000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.458008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.458122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.458137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.458159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
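Each loop item in the output above and below is a kolla-ansible service definition (container name, image, volumes, healthcheck, optional haproxy config). The task only templates a config file for an item when the service is enabled and the current host is in the service's group, which is why only neutron-server and neutron-ovn-metadata-agent report "changed" on their respective nodes while every other item is "skipping". The following is a minimal Python sketch of that observed skip/changed selection logic, assuming the dict layout shown in the log; it is an illustration of the pattern, not kolla-ansible's actual filter implementation.

    from typing import Any, Dict

    def truthy(value: Any) -> bool:
        # Normalise the mixed flag values seen in the log items ('no', False, True).
        if isinstance(value, str):
            return value.strip().lower() in ("yes", "true", "1")
        return bool(value)

    def should_template(service: Dict[str, Any]) -> bool:
        # An item is acted on only when the service is enabled and the
        # current host belongs to the service's group (host_in_groups).
        return truthy(service.get("enabled")) and truthy(service.get("host_in_groups"))

    # Abbreviated examples taken from the loop output above:
    examples = {
        "neutron-server@testbed-node-1": {"enabled": True, "host_in_groups": True},    # changed
        "neutron-tls-proxy@testbed-node-1": {"enabled": "no", "host_in_groups": True}, # skipping
        "neutron-server@testbed-node-3": {"enabled": True, "host_in_groups": False},   # skipping
    }

    for name, svc in examples.items():
        print(name, "templated" if should_template(svc) else "skipped")
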
2025-05-14 14:51:09.458193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.458207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.458244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.458325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458343 | orchestrator | 2025-05-14 14:51:09.458357 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-14 14:51:09.458369 | orchestrator | Wednesday 14 May 2025 14:47:41 +0000 (0:00:04.666) 0:01:18.579 ********* 2025-05-14 14:51:09.458384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.458393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.458430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.459067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.459132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.459171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.459219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 
14:51:09.459255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.459319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.459343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.459365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.459376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.459399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.459435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.459478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.459505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.459527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.459534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.459560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.459592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.459636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.459657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.459683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.459690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.459820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.459842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.459871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.459883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.459904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.459948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.459973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.459981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.460000 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.460020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.460045 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.460061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.460080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.460095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.460362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.460376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460384 | orchestrator |
2025-05-14 14:51:09.460391 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-05-14 14:51:09.460398 | orchestrator | Wednesday 14 May 2025 14:47:48 +0000 (0:00:07.613) 0:01:26.193 *********
2025-05-14 14:51:09.460405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.460419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.460694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.460756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.460772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.460804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.460817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460825 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.460832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.460847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.460928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.460963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.460984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.460995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.461002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.461033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461051 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.461058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.461069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.461087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.461131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.461162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461181 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.461261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 
14:51:09.461337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.461387 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.461401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.461454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.461486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.461546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.461597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461612 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.461623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.461638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.461671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.461734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.461746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.461758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.461767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.461775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-14 14:51:09.461782 | orchestrator |
2025-05-14 14:51:09.461789 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-05-14 14:51:09.461795 | orchestrator | Wednesday 14 May 2025 14:47:52 +0000 (0:00:04.112) 0:01:30.305 *********
2025-05-14 14:51:09.461802 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.461808 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.461815 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.461821 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:51:09.461827 | orchestrator | changed: [testbed-node-2]
2025-05-14 14:51:09.461833 | orchestrator | changed: [testbed-node-1]
2025-05-14 14:51:09.461839 | orchestrator |
2025-05-14 14:51:09.461846 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-05-14 14:51:09.461856 | orchestrator | Wednesday 14 May 2025 14:47:57 +0000 (0:00:04.830) 0:01:35.135 *********
2025-05-14 14:51:09.461865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-14 14:51:09.462132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-14 14:51:09.462147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.462179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.462236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.462246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.462261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.462301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.462308 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.462367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.462375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462386 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.462397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.462404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.462480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.462498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.462533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.462664 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.462683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.462694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.462929 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.462936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.462949 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.462956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.462966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.463047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.463064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.463071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.463476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.463527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.463538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.463725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.463741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463756 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.463763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.463770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.463995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.464029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
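The per-item dicts dumped above and below are the neutron service definitions that kolla-ansible loops over for this task; whether a host reports 'changed' or 'skipping' for a given item tracks that item's 'enabled' flag and its 'host_in_groups' boolean (compare the 'neutron-server' entry for testbed-node-0, enabled and in group, which comes back 'changed', with the disabled agents, which are skipped). A minimal Python sketch of that selection, using a hypothetical item shaped like the ones in this log rather than kolla-ansible's actual condition:

    # Illustrative only: mirrors the changed/skipping pattern visible in this
    # task output, not the real kolla-ansible implementation.
    def host_runs_service(service: dict) -> bool:
        # Service entries carry an 'enabled' flag (bool or 'yes'/'no' string)
        # and a 'host_in_groups' boolean for the current host.
        enabled = str(service.get("enabled", False)).lower() in ("true", "yes")
        return enabled and bool(service.get("host_in_groups", False))

    # Hypothetical item shaped like 'neutron-server' on testbed-node-0:
    print(host_runs_service({"enabled": True, "host_in_groups": True}))   # True  -> 'changed'
    # Hypothetical item shaped like one of the disabled agents:
    print(host_runs_service({"enabled": False, "host_in_groups": True}))  # False -> 'skipping'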
2025-05-14 14:51:09.464114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.464131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
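Each item's 'healthcheck' block ('interval', 'retries', 'start_period' and 'timeout' as seconds, plus a 'CMD-SHELL' test such as 'healthcheck_curl ...' or 'healthcheck_port ...') is the container health check handed to the container engine when the container is created. A minimal sketch of how such a block maps onto standard docker run health options; this assumes the conventional Docker flags (--health-cmd, --health-interval, --health-retries, --health-start-period, --health-timeout) and plain-second values, and is illustrative rather than the code path kolla-ansible itself uses:

    # Illustrative conversion of a healthcheck dict like the ones in this log
    # into docker run flags; assumes Docker's standard healthcheck options.
    def healthcheck_flags(hc: dict) -> list:
        test = hc["test"]
        cmd = " ".join(test[1:]) if test and test[0] == "CMD-SHELL" else " ".join(test)
        return [
            "--health-cmd=" + cmd,
            "--health-interval=" + hc["interval"] + "s",
            "--health-retries=" + hc["retries"],
            "--health-start-period=" + hc["start_period"] + "s",
            "--health-timeout=" + hc["timeout"] + "s",
        ]

    # Hypothetical block shaped like the neutron_ovn_metadata_agent entry above:
    print(healthcheck_flags({
        "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
        "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
    }))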
2025-05-14 14:51:09.464189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.464198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.464214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.464302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.464390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.464410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.464431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.464462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.464480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.464539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.464813 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.464880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.464886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.464892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.464902 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-14 14:51:09.464908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-14 14:51:09.464918 | orchestrator |
2025-05-14 14:51:09.464924 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-05-14 14:51:09.464930 | orchestrator | Wednesday 14 May 2025 14:48:01 +0000 (0:00:04.052) 0:01:39.188 *********
2025-05-14 14:51:09.464935 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.464986 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.464994 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465000 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465005 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465010 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.465016 | orchestrator |
2025-05-14 14:51:09.465021 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-05-14 14:51:09.465026 | orchestrator | Wednesday 14 May 2025 14:48:04 +0000 (0:00:02.377) 0:01:41.565 *********
2025-05-14 14:51:09.465032 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.465037 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465042 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465047 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.465053 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.465073 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465080 | orchestrator |
2025-05-14 14:51:09.465085 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-05-14 14:51:09.465091 | orchestrator | Wednesday 14 May 2025 14:48:06 +0000 (0:00:02.079) 0:01:43.645 *********
2025-05-14 14:51:09.465096 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465102 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.465107 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.465146 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465152 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465157 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.465163 | orchestrator |
2025-05-14 14:51:09.465168 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-14 14:51:09.465174 | orchestrator | Wednesday 14 May 2025 14:48:08 +0000 (0:00:02.027) 0:01:45.672 *********
2025-05-14 14:51:09.465179 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465184 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.465190 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.465195 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465200 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465206 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.465211 | orchestrator |
2025-05-14 14:51:09.465216 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-14 14:51:09.465222 | orchestrator | Wednesday 14 May 2025 14:48:10 +0000 (0:00:01.799) 0:01:47.472 *********
2025-05-14 14:51:09.465227 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465232 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.465238 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.465243 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465248 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465253 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.465259 | orchestrator |
2025-05-14 14:51:09.465391 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-05-14 14:51:09.465404 | orchestrator | Wednesday 14 May 2025 14:48:12 +0000 (0:00:02.067) 0:01:49.540 *********
2025-05-14 14:51:09.465409 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465415 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.465420 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.465426 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465431 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465442 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.465448 | orchestrator |
2025-05-14 14:51:09.465453 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-05-14 14:51:09.465458 | orchestrator | Wednesday 14 May 2025 14:48:15 +0000 (0:00:02.943) 0:01:52.483 *********
2025-05-14 14:51:09.465464 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 14:51:09.465470 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:51:09.465475 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 14:51:09.465480 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.465486 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 14:51:09.465491 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 14:51:09.465496 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.465502 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.465507 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 14:51:09.465512 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.465518 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-14 14:51:09.465523 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.465528 | orchestrator | 2025-05-14 14:51:09.465537 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-14 14:51:09.465543 | orchestrator | Wednesday 14 May 2025 14:48:17 +0000 (0:00:02.229) 0:01:54.712 ********* 2025-05-14 14:51:09.465605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.465620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.465662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.465717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.465723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.465739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.465754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.465795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.465815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.465821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465827 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.465836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.465877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.465911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.465957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.465966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.465981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.466058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.466164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466201 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.466216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.466226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.466334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.466427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.466510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.466539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.466553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.466637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466643 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.466728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.466790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466802 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.466808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.466814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.466883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 
14:51:09.466952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.466961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.466972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.466978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.466987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.467032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.467041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.467047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467076 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.467085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.467146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-14 14:51:09.467152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.467257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.467334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.467393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.467496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467512 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.467522 | orchestrator | 2025-05-14 14:51:09.467532 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-14 14:51:09.467541 | orchestrator | Wednesday 14 May 2025 14:48:19 +0000 (0:00:01.919) 0:01:56.632 ********* 2025-05-14 14:51:09.467550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.467559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.467649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.467721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.467734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.467773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.467812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467820 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.467825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-14 14:51:09.467831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.467884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.467918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.467963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.467968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.467973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.467992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.468006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468014 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.468071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.468081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.468144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.468199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.468264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.468351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.468364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468374 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.468440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-14 14:51:09.468452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.468481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.468658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.468667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.468839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.468857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.468877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.468935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.468942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.468948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.468961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.468969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469030 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.469046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.469052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469057 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.469110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.469155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.469174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.469200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.469212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.469223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.469232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.469262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.469322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.469329 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469334 | orchestrator | 2025-05-14 14:51:09.469339 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-14 14:51:09.469344 | orchestrator | Wednesday 14 May 2025 14:48:21 +0000 (0:00:02.133) 0:01:58.765 ********* 2025-05-14 14:51:09.469349 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469354 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469358 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469363 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469368 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469373 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469377 | orchestrator | 2025-05-14 14:51:09.469382 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-14 14:51:09.469387 | orchestrator | Wednesday 14 May 2025 14:48:23 +0000 (0:00:01.777) 0:02:00.543 ********* 2025-05-14 14:51:09.469392 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469397 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469401 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469406 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:51:09.469411 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:51:09.469416 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:51:09.469420 | orchestrator | 2025-05-14 14:51:09.469425 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-14 14:51:09.469441 | orchestrator | Wednesday 14 May 2025 14:48:30 +0000 (0:00:07.808) 0:02:08.352 ********* 2025-05-14 14:51:09.469450 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469458 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469466 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469474 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469483 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469496 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469505 | orchestrator | 2025-05-14 14:51:09.469512 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-14 14:51:09.469521 | orchestrator | Wednesday 14 May 2025 14:48:32 +0000 (0:00:01.737) 0:02:10.090 ********* 2025-05-14 14:51:09.469530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469538 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469546 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469569 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 14:51:09.469578 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469585 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469594 | orchestrator | 2025-05-14 14:51:09.469601 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-14 14:51:09.469609 | orchestrator | Wednesday 14 May 2025 14:48:34 +0000 (0:00:01.906) 0:02:11.996 ********* 2025-05-14 14:51:09.469617 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469625 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469633 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469644 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469652 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469660 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469668 | orchestrator | 2025-05-14 14:51:09.469677 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-14 14:51:09.469685 | orchestrator | Wednesday 14 May 2025 14:48:36 +0000 (0:00:02.163) 0:02:14.160 ********* 2025-05-14 14:51:09.469729 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469740 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469745 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469751 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469756 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469762 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469767 | orchestrator | 2025-05-14 14:51:09.469772 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-14 14:51:09.469778 | orchestrator | Wednesday 14 May 2025 14:48:38 +0000 (0:00:02.155) 0:02:16.315 ********* 2025-05-14 14:51:09.469783 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469788 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469793 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469799 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469804 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469809 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469815 | orchestrator | 2025-05-14 14:51:09.469820 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-14 14:51:09.469825 | orchestrator | Wednesday 14 May 2025 14:48:41 +0000 (0:00:02.689) 0:02:19.005 ********* 2025-05-14 14:51:09.469831 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469836 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469841 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469847 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469852 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469858 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469862 | orchestrator | 2025-05-14 14:51:09.469867 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-14 14:51:09.469873 | orchestrator | Wednesday 14 May 2025 14:48:45 +0000 (0:00:04.128) 0:02:23.133 ********* 2025-05-14 14:51:09.469878 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469891 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469899 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469907 | 
orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.469914 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469922 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469930 | orchestrator | 2025-05-14 14:51:09.469937 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-14 14:51:09.469945 | orchestrator | Wednesday 14 May 2025 14:48:48 +0000 (0:00:03.318) 0:02:26.452 ********* 2025-05-14 14:51:09.469953 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.469960 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.469968 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.469976 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.469985 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.469993 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.470001 | orchestrator | 2025-05-14 14:51:09.470009 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-14 14:51:09.470040 | orchestrator | Wednesday 14 May 2025 14:48:52 +0000 (0:00:03.378) 0:02:29.831 ********* 2025-05-14 14:51:09.470050 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 14:51:09.470059 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.470068 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 14:51:09.470076 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.470084 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 14:51:09.470091 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.470096 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 14:51:09.470101 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.470105 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 14:51:09.470110 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.470114 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 14:51:09.470119 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.470123 | orchestrator | 2025-05-14 14:51:09.470127 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-14 14:51:09.470132 | orchestrator | Wednesday 14 May 2025 14:48:55 +0000 (0:00:03.161) 0:02:32.993 ********* 2025-05-14 14:51:09.470143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.470180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.470227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.470336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.470391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470407 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:51:09.470419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.470447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.470480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.470537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.470575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.470595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470603 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:51:09.470622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.470638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.470691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.470728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470738 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:51:09.470743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.470750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.470799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.470808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.470909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.470924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.470957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470968 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.470973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.470989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.470994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471046 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:51:09.471051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471075 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:51:09.471080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.471085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-14 14:51:09.471143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471224 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:51:09.471229 | orchestrator | 2025-05-14 14:51:09.471234 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-14 14:51:09.471238 | orchestrator | Wednesday 14 May 2025 14:48:58 +0000 (0:00:02.945) 0:02:35.938 ********* 2025-05-14 14:51:09.471243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.471252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.471378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471403 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 14:51:09.471514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.471575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 14:51:09.471583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.471616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471703 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.471791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-05-14 14:51:09.471801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 14:51:09.471875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.471954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.471968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.471973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 14:51:09.471984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.471989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:51:09.471993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:51:09.472004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.472011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 14:51:09.472015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 14:51:09.472020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 14:51:09.472024 | orchestrator | 2025-05-14 14:51:09.472028 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 14:51:09.472033 | orchestrator | Wednesday 14 May 2025 14:49:02 +0000 (0:00:03.722) 0:02:39.661 ********* 2025-05-14 14:51:09.472037 | orchestrator | skipping: 
[testbed-node-0]
2025-05-14 14:51:09.472041 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:51:09.472045 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:51:09.472049 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:51:09.472053 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:51:09.472057 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:51:09.472062 | orchestrator |
2025-05-14 14:51:09.472066 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-05-14 14:51:09.472070 | orchestrator | Wednesday 14 May 2025 14:49:02 +0000 (0:00:00.522) 0:02:40.184 *********
2025-05-14 14:51:09.472077 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:51:09.472081 | orchestrator |
2025-05-14 14:51:09.472085 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-05-14 14:51:09.472089 | orchestrator | Wednesday 14 May 2025 14:49:05 +0000 (0:00:02.671) 0:02:42.856 *********
2025-05-14 14:51:09.472093 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:51:09.472097 | orchestrator |
2025-05-14 14:51:09.472101 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-05-14 14:51:09.472105 | orchestrator | Wednesday 14 May 2025 14:49:07 +0000 (0:00:02.388) 0:02:45.244 *********
2025-05-14 14:51:09.472109 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:51:09.472113 | orchestrator |
2025-05-14 14:51:09.472117 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 14:51:09.472122 | orchestrator | Wednesday 14 May 2025 14:49:47 +0000 (0:00:40.131) 0:03:25.376 *********
2025-05-14 14:51:09.472126 | orchestrator |
2025-05-14 14:51:09.472130 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 14:51:09.472134 | orchestrator | Wednesday 14 May 2025 14:49:47 +0000 (0:00:00.060) 0:03:25.437 *********
2025-05-14 14:51:09.472138 | orchestrator |
2025-05-14 14:51:09.472142 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 14:51:09.472146 | orchestrator | Wednesday 14 May 2025 14:49:48 +0000 (0:00:00.292) 0:03:25.729 *********
2025-05-14 14:51:09.472150 | orchestrator |
2025-05-14 14:51:09.472154 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 14:51:09.472158 | orchestrator | Wednesday 14 May 2025 14:49:48 +0000 (0:00:00.068) 0:03:25.798 *********
2025-05-14 14:51:09.472162 | orchestrator |
2025-05-14 14:51:09.472169 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 14:51:09.472173 | orchestrator | Wednesday 14 May 2025 14:49:48 +0000 (0:00:00.059) 0:03:25.857 *********
2025-05-14 14:51:09.472177 | orchestrator |
2025-05-14 14:51:09.472181 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 14:51:09.472185 | orchestrator | Wednesday 14 May 2025 14:49:48 +0000 (0:00:00.058) 0:03:25.916 *********
2025-05-14 14:51:09.472189 | orchestrator |
2025-05-14 14:51:09.472193 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-14 14:51:09.472197 | orchestrator | Wednesday 14 May 2025 14:49:48 +0000 (0:00:00.374) 0:03:26.291 *********
2025-05-14 14:51:09.472201 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:51:09.472205 | orchestrator | changed: [testbed-node-1]
2025-05-14 14:51:09.472209 | orchestrator | changed: [testbed-node-2]
2025-05-14 14:51:09.472213 | orchestrator |
2025-05-14 14:51:09.472217 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-14 14:51:09.472222 | orchestrator | Wednesday 14 May 2025 14:50:17 +0000 (0:00:29.070) 0:03:55.361 *********
2025-05-14 14:51:09.472226 | orchestrator | changed: [testbed-node-3]
2025-05-14 14:51:09.472230 | orchestrator | changed: [testbed-node-4]
2025-05-14 14:51:09.472234 | orchestrator | changed: [testbed-node-5]
2025-05-14 14:51:09.472238 | orchestrator |
2025-05-14 14:51:09.472242 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 14:51:09.472248 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 14:51:09.472254 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-14 14:51:09.472258 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-14 14:51:09.472262 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-14 14:51:09.472281 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-14 14:51:09.472288 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-14 14:51:09.472292 | orchestrator |
2025-05-14 14:51:09.472296 | orchestrator |
2025-05-14 14:51:09.472300 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 14:51:09.472305 | orchestrator | Wednesday 14 May 2025 14:51:08 +0000 (0:00:50.646) 0:04:46.007 *********
2025-05-14 14:51:09.472309 | orchestrator | ===============================================================================
2025-05-14 14:51:09.472313 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.65s
2025-05-14 14:51:09.472317 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.13s
2025-05-14 14:51:09.472321 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.07s
2025-05-14 14:51:09.472325 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.46s
2025-05-14 14:51:09.472329 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.81s
2025-05-14 14:51:09.472333 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.61s
2025-05-14 14:51:09.472337 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.53s
2025-05-14 14:51:09.472341 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.68s
2025-05-14 14:51:09.472345 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.03s
2025-05-14 14:51:09.472350 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.83s
2025-05-14 14:51:09.472354 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.67s
2025-05-14 14:51:09.472358 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.48s
2025-05-14 14:51:09.472362 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.38s
2025-05-14 14:51:09.472366 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.17s
2025-05-14 14:51:09.472370 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.13s
2025-05-14 14:51:09.472374 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.11s
2025-05-14 14:51:09.472378 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.05s
2025-05-14 14:51:09.472382 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.96s
2025-05-14 14:51:09.472386 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.72s
2025-05-14 14:51:09.472390 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.56s
2025-05-14 14:51:09.472394 | orchestrator | 2025-05-14 14:51:09 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED
2025-05-14 14:51:09.472399 | orchestrator | 2025-05-14 14:51:09 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:51:12.500324 | orchestrator | 2025-05-14 14:51:12 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED
2025-05-14 14:51:12.500706 | orchestrator | 2025-05-14 14:51:12 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED
2025-05-14 14:51:12.501438 | orchestrator | 2025-05-14 14:51:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:51:12.503418 | orchestrator | 2025-05-14 14:51:12 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED
2025-05-14 14:51:12.503446 | orchestrator | 2025-05-14 14:51:12 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED
2025-05-14 14:51:12.503458 | orchestrator | 2025-05-14 14:51:12 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:51:15.548100 | orchestrator | 2025-05-14 14:51:15 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED
2025-05-14 14:51:15.548589 | orchestrator | 2025-05-14 14:51:15 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED
2025-05-14 14:51:15.549356 | orchestrator | 2025-05-14 14:51:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:51:15.550194 | orchestrator | 2025-05-14 14:51:15 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED
2025-05-14 14:51:15.551058 | orchestrator | 2025-05-14 14:51:15 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED
2025-05-14 14:51:15.551100 | orchestrator | 2025-05-14 14:51:15 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:51:18.572807 | orchestrator | 2025-05-14 14:51:18 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED
2025-05-14 14:51:18.573511 | orchestrator | 2025-05-14 14:51:18 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED
2025-05-14 14:51:18.573551 | orchestrator | 2025-05-14 14:51:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:51:18.573861 | orchestrator | 2025-05-14 14:51:18 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED
2025-05-14 14:51:18.575559 | orchestrator | 2025-05-14 14:51:18 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED
2025-05-14 14:51:18.575595 | orchestrator | 2025-05-14 14:51:18 | INFO  | Wait
1 second(s) until the next check 2025-05-14 14:51:21.608996 | orchestrator | 2025-05-14 14:51:21 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:21.609094 | orchestrator | 2025-05-14 14:51:21 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:21.613409 | orchestrator | 2025-05-14 14:51:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:21.613725 | orchestrator | 2025-05-14 14:51:21 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:21.615435 | orchestrator | 2025-05-14 14:51:21 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:21.615461 | orchestrator | 2025-05-14 14:51:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:24.644694 | orchestrator | 2025-05-14 14:51:24 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:24.644783 | orchestrator | 2025-05-14 14:51:24 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:24.645197 | orchestrator | 2025-05-14 14:51:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:24.645520 | orchestrator | 2025-05-14 14:51:24 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:24.646130 | orchestrator | 2025-05-14 14:51:24 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:24.646377 | orchestrator | 2025-05-14 14:51:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:27.679914 | orchestrator | 2025-05-14 14:51:27 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:27.680003 | orchestrator | 2025-05-14 14:51:27 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:27.680317 | orchestrator | 2025-05-14 14:51:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:27.681352 | orchestrator | 2025-05-14 14:51:27 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:27.688231 | orchestrator | 2025-05-14 14:51:27 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:27.688319 | orchestrator | 2025-05-14 14:51:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:30.714631 | orchestrator | 2025-05-14 14:51:30 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:30.714856 | orchestrator | 2025-05-14 14:51:30 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:30.714892 | orchestrator | 2025-05-14 14:51:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:30.715422 | orchestrator | 2025-05-14 14:51:30 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:30.716122 | orchestrator | 2025-05-14 14:51:30 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:30.716144 | orchestrator | 2025-05-14 14:51:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:33.748729 | orchestrator | 2025-05-14 14:51:33 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:33.749128 | orchestrator | 2025-05-14 14:51:33 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:33.750085 | orchestrator | 2025-05-14 14:51:33 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:33.750665 | orchestrator | 2025-05-14 14:51:33 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:33.751926 | orchestrator | 2025-05-14 14:51:33 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:33.752019 | orchestrator | 2025-05-14 14:51:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:36.777185 | orchestrator | 2025-05-14 14:51:36 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:36.777362 | orchestrator | 2025-05-14 14:51:36 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:36.777893 | orchestrator | 2025-05-14 14:51:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:36.778402 | orchestrator | 2025-05-14 14:51:36 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:36.778922 | orchestrator | 2025-05-14 14:51:36 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:36.778944 | orchestrator | 2025-05-14 14:51:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:39.804375 | orchestrator | 2025-05-14 14:51:39 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:39.805943 | orchestrator | 2025-05-14 14:51:39 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:39.805972 | orchestrator | 2025-05-14 14:51:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:39.805984 | orchestrator | 2025-05-14 14:51:39 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:39.805995 | orchestrator | 2025-05-14 14:51:39 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:39.806006 | orchestrator | 2025-05-14 14:51:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:42.834489 | orchestrator | 2025-05-14 14:51:42 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:42.834712 | orchestrator | 2025-05-14 14:51:42 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:42.835184 | orchestrator | 2025-05-14 14:51:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:42.835685 | orchestrator | 2025-05-14 14:51:42 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:42.836504 | orchestrator | 2025-05-14 14:51:42 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:42.836583 | orchestrator | 2025-05-14 14:51:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:45.864232 | orchestrator | 2025-05-14 14:51:45 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:45.864930 | orchestrator | 2025-05-14 14:51:45 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:45.865429 | orchestrator | 2025-05-14 14:51:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:45.867005 | orchestrator | 2025-05-14 14:51:45 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:45.867493 | orchestrator | 2025-05-14 14:51:45 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:45.867517 | orchestrator | 2025-05-14 
14:51:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:48.892188 | orchestrator | 2025-05-14 14:51:48 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:48.892363 | orchestrator | 2025-05-14 14:51:48 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:48.892609 | orchestrator | 2025-05-14 14:51:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:48.893401 | orchestrator | 2025-05-14 14:51:48 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:48.894882 | orchestrator | 2025-05-14 14:51:48 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:48.894910 | orchestrator | 2025-05-14 14:51:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:51.921130 | orchestrator | 2025-05-14 14:51:51 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:51.921354 | orchestrator | 2025-05-14 14:51:51 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:51.921867 | orchestrator | 2025-05-14 14:51:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:51.923006 | orchestrator | 2025-05-14 14:51:51 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:51.923376 | orchestrator | 2025-05-14 14:51:51 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:51.923484 | orchestrator | 2025-05-14 14:51:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:54.962231 | orchestrator | 2025-05-14 14:51:54 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:54.962529 | orchestrator | 2025-05-14 14:51:54 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:54.963035 | orchestrator | 2025-05-14 14:51:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:54.963644 | orchestrator | 2025-05-14 14:51:54 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:54.964181 | orchestrator | 2025-05-14 14:51:54 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:54.964209 | orchestrator | 2025-05-14 14:51:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:51:57.986624 | orchestrator | 2025-05-14 14:51:57 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:51:57.986738 | orchestrator | 2025-05-14 14:51:57 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:51:57.987004 | orchestrator | 2025-05-14 14:51:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:51:57.987496 | orchestrator | 2025-05-14 14:51:57 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:51:57.987960 | orchestrator | 2025-05-14 14:51:57 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:51:57.987981 | orchestrator | 2025-05-14 14:51:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:01.014843 | orchestrator | 2025-05-14 14:52:01 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:01.015025 | orchestrator | 2025-05-14 14:52:01 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:01.015581 | orchestrator | 2025-05-14 
14:52:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:01.016388 | orchestrator | 2025-05-14 14:52:01 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:01.016504 | orchestrator | 2025-05-14 14:52:01 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:01.016608 | orchestrator | 2025-05-14 14:52:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:04.052776 | orchestrator | 2025-05-14 14:52:04 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:04.052924 | orchestrator | 2025-05-14 14:52:04 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:04.053434 | orchestrator | 2025-05-14 14:52:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:04.053833 | orchestrator | 2025-05-14 14:52:04 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:04.054520 | orchestrator | 2025-05-14 14:52:04 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:04.054584 | orchestrator | 2025-05-14 14:52:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:07.081669 | orchestrator | 2025-05-14 14:52:07 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:07.081785 | orchestrator | 2025-05-14 14:52:07 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:07.082434 | orchestrator | 2025-05-14 14:52:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:07.083485 | orchestrator | 2025-05-14 14:52:07 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:07.084152 | orchestrator | 2025-05-14 14:52:07 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:07.084174 | orchestrator | 2025-05-14 14:52:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:10.113320 | orchestrator | 2025-05-14 14:52:10 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:10.113542 | orchestrator | 2025-05-14 14:52:10 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:10.114512 | orchestrator | 2025-05-14 14:52:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:10.115263 | orchestrator | 2025-05-14 14:52:10 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:10.115762 | orchestrator | 2025-05-14 14:52:10 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:10.115817 | orchestrator | 2025-05-14 14:52:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:13.148126 | orchestrator | 2025-05-14 14:52:13 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:13.148522 | orchestrator | 2025-05-14 14:52:13 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:13.150438 | orchestrator | 2025-05-14 14:52:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:13.151654 | orchestrator | 2025-05-14 14:52:13 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:13.152231 | orchestrator | 2025-05-14 14:52:13 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:13.153203 | 
orchestrator | 2025-05-14 14:52:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:16.190439 | orchestrator | 2025-05-14 14:52:16 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:16.190525 | orchestrator | 2025-05-14 14:52:16 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:16.192668 | orchestrator | 2025-05-14 14:52:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:16.194718 | orchestrator | 2025-05-14 14:52:16 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:16.196304 | orchestrator | 2025-05-14 14:52:16 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:16.196450 | orchestrator | 2025-05-14 14:52:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:19.234364 | orchestrator | 2025-05-14 14:52:19 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:19.236148 | orchestrator | 2025-05-14 14:52:19 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:19.238493 | orchestrator | 2025-05-14 14:52:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:19.242555 | orchestrator | 2025-05-14 14:52:19 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:19.245685 | orchestrator | 2025-05-14 14:52:19 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:19.246261 | orchestrator | 2025-05-14 14:52:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:22.298343 | orchestrator | 2025-05-14 14:52:22 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:22.298531 | orchestrator | 2025-05-14 14:52:22 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:22.303318 | orchestrator | 2025-05-14 14:52:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:22.306602 | orchestrator | 2025-05-14 14:52:22 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:22.308131 | orchestrator | 2025-05-14 14:52:22 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:22.308162 | orchestrator | 2025-05-14 14:52:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:25.356163 | orchestrator | 2025-05-14 14:52:25 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:25.357494 | orchestrator | 2025-05-14 14:52:25 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:25.358845 | orchestrator | 2025-05-14 14:52:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:25.360801 | orchestrator | 2025-05-14 14:52:25 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:25.364659 | orchestrator | 2025-05-14 14:52:25 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:25.364695 | orchestrator | 2025-05-14 14:52:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:28.424590 | orchestrator | 2025-05-14 14:52:28 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:28.424893 | orchestrator | 2025-05-14 14:52:28 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:28.425600 | 
orchestrator | 2025-05-14 14:52:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:28.427635 | orchestrator | 2025-05-14 14:52:28 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:28.428548 | orchestrator | 2025-05-14 14:52:28 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:28.428586 | orchestrator | 2025-05-14 14:52:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:31.466511 | orchestrator | 2025-05-14 14:52:31 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:31.466611 | orchestrator | 2025-05-14 14:52:31 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:31.466990 | orchestrator | 2025-05-14 14:52:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:31.467573 | orchestrator | 2025-05-14 14:52:31 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:31.468299 | orchestrator | 2025-05-14 14:52:31 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:31.468381 | orchestrator | 2025-05-14 14:52:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:34.515387 | orchestrator | 2025-05-14 14:52:34 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:34.517658 | orchestrator | 2025-05-14 14:52:34 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:34.519254 | orchestrator | 2025-05-14 14:52:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:34.520829 | orchestrator | 2025-05-14 14:52:34 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:34.522682 | orchestrator | 2025-05-14 14:52:34 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:34.522953 | orchestrator | 2025-05-14 14:52:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:37.564003 | orchestrator | 2025-05-14 14:52:37 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:37.566125 | orchestrator | 2025-05-14 14:52:37 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:37.567866 | orchestrator | 2025-05-14 14:52:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:37.570224 | orchestrator | 2025-05-14 14:52:37 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:37.571950 | orchestrator | 2025-05-14 14:52:37 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:37.572265 | orchestrator | 2025-05-14 14:52:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:40.631463 | orchestrator | 2025-05-14 14:52:40 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:40.633771 | orchestrator | 2025-05-14 14:52:40 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:40.635085 | orchestrator | 2025-05-14 14:52:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:40.637150 | orchestrator | 2025-05-14 14:52:40 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:40.638332 | orchestrator | 2025-05-14 14:52:40 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 
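
The PLAY RECAP above is the quickest health signal in this stretch of the log: the neutron play counts as clean because every host reports failed=0 and unreachable=0. Below is a minimal Python sketch of pulling those counters out of recap lines; the regex and the parse_recap helper are illustrative only and are not part of the Zuul job or of kolla-ansible.

    # Sketch: parse Ansible PLAY RECAP lines of the form
    # "testbed-node-0 : ok=27 changed=16 unreachable=0 failed=0 ..."
    # and check that no host failed or was unreachable.
    import re
    from typing import Dict, Iterable

    RECAP_RE = re.compile(r"^\s*(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

    def parse_recap(lines: Iterable[str]) -> Dict[str, Dict[str, int]]:
        hosts: Dict[str, Dict[str, int]] = {}
        for line in lines:
            m = RECAP_RE.match(line)
            if not m:
                continue  # skip anything that is not a recap line
            counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("counters"))}
            hosts[m.group("host")] = counters
        return hosts

    recap = parse_recap([
        "testbed-node-0 : ok=27 changed=16 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0",
        "testbed-node-3 : ok=15 changed=7 unreachable=0 failed=0 skipped=33 rescued=0 ignored=0",
    ])
    assert all(c["failed"] == 0 and c["unreachable"] == 0 for c in recap.values())
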
2025-05-14 14:52:40.638726 | orchestrator | 2025-05-14 14:52:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:43.691338 | orchestrator | 2025-05-14 14:52:43 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:43.692403 | orchestrator | 2025-05-14 14:52:43 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:43.693754 | orchestrator | 2025-05-14 14:52:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:43.695257 | orchestrator | 2025-05-14 14:52:43 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:43.697358 | orchestrator | 2025-05-14 14:52:43 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:43.697457 | orchestrator | 2025-05-14 14:52:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:46.763364 | orchestrator | 2025-05-14 14:52:46 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:46.764081 | orchestrator | 2025-05-14 14:52:46 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:46.765948 | orchestrator | 2025-05-14 14:52:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:46.768212 | orchestrator | 2025-05-14 14:52:46 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:46.769747 | orchestrator | 2025-05-14 14:52:46 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:46.769849 | orchestrator | 2025-05-14 14:52:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:49.818088 | orchestrator | 2025-05-14 14:52:49 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:49.818192 | orchestrator | 2025-05-14 14:52:49 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:49.818824 | orchestrator | 2025-05-14 14:52:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:49.820995 | orchestrator | 2025-05-14 14:52:49 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:49.822132 | orchestrator | 2025-05-14 14:52:49 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:49.822248 | orchestrator | 2025-05-14 14:52:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:52.862673 | orchestrator | 2025-05-14 14:52:52 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:52.864222 | orchestrator | 2025-05-14 14:52:52 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:52.865306 | orchestrator | 2025-05-14 14:52:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:52.865648 | orchestrator | 2025-05-14 14:52:52 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:52.867047 | orchestrator | 2025-05-14 14:52:52 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:52.867101 | orchestrator | 2025-05-14 14:52:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:55.906732 | orchestrator | 2025-05-14 14:52:55 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:55.908012 | orchestrator | 2025-05-14 14:52:55 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 
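
The repeated "Task ... is in state STARTED" and "Wait 1 second(s) until the next check" lines come from the OSISM wrapper polling its deployment tasks until they leave the STARTED state. A rough Python sketch of that polling pattern follows; wait_for_tasks and get_state are hypothetical names, and this makes no claim about the real client implementation.

    # Sketch: poll a set of task IDs, log their state, and keep waiting
    # while any of them is still STARTED; return the final states.
    import time
    from datetime import datetime
    from typing import Callable, Dict, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       interval: float = 1.0) -> Dict[str, str]:
        pending = list(task_ids)
        states: Dict[str, str] = {}
        while pending:
            now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            for task_id in pending:
                states[task_id] = get_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
                print(f"{now} | INFO  | Task {task_id} is in state {states[task_id]}")
            pending = [t for t in pending if states[t] == "STARTED"]
            if pending:
                print(f"{now} | INFO  | Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)
        return states

In the log the checks land roughly three seconds apart even with a one-second wait, which is consistent with the state lookups themselves taking time; the tasks below transition to SUCCESS one by one as the corresponding plays finish.
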
2025-05-14 14:52:55.908654 | orchestrator | 2025-05-14 14:52:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:55.909181 | orchestrator | 2025-05-14 14:52:55 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:55.910636 | orchestrator | 2025-05-14 14:52:55 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:55.910663 | orchestrator | 2025-05-14 14:52:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:52:58.970640 | orchestrator | 2025-05-14 14:52:58 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:52:58.971804 | orchestrator | 2025-05-14 14:52:58 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:52:58.974242 | orchestrator | 2025-05-14 14:52:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:52:58.975756 | orchestrator | 2025-05-14 14:52:58 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:52:58.978311 | orchestrator | 2025-05-14 14:52:58 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:52:58.978350 | orchestrator | 2025-05-14 14:52:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:02.016446 | orchestrator | 2025-05-14 14:53:02 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:02.021284 | orchestrator | 2025-05-14 14:53:02 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:02.021647 | orchestrator | 2025-05-14 14:53:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:02.022446 | orchestrator | 2025-05-14 14:53:02 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state STARTED 2025-05-14 14:53:02.022969 | orchestrator | 2025-05-14 14:53:02 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:02.022996 | orchestrator | 2025-05-14 14:53:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:05.051939 | orchestrator | 2025-05-14 14:53:05 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:05.052028 | orchestrator | 2025-05-14 14:53:05 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:05.052724 | orchestrator | 2025-05-14 14:53:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:05.054573 | orchestrator | 2025-05-14 14:53:05 | INFO  | Task ce6b534f-b977-402e-98c7-ce135e5b5bf9 is in state SUCCESS 2025-05-14 14:53:05.055812 | orchestrator | 2025-05-14 14:53:05.055842 | orchestrator | 2025-05-14 14:53:05.055854 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:53:05.055865 | orchestrator | 2025-05-14 14:53:05.055877 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:53:05.055888 | orchestrator | Wednesday 14 May 2025 14:48:45 +0000 (0:00:00.349) 0:00:00.349 ********* 2025-05-14 14:53:05.055900 | orchestrator | ok: [testbed-manager] 2025-05-14 14:53:05.055912 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:53:05.055923 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:53:05.055934 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:53:05.055946 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:53:05.055957 | orchestrator | ok: [testbed-node-4] 2025-05-14 
14:53:05.055968 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:53:05.055979 | orchestrator | 2025-05-14 14:53:05.056138 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:53:05.056152 | orchestrator | Wednesday 14 May 2025 14:48:46 +0000 (0:00:01.010) 0:00:01.360 ********* 2025-05-14 14:53:05.056163 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056175 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056186 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056197 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056207 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056218 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056228 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-14 14:53:05.056239 | orchestrator | 2025-05-14 14:53:05.056250 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-14 14:53:05.056301 | orchestrator | 2025-05-14 14:53:05.056312 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-14 14:53:05.056323 | orchestrator | Wednesday 14 May 2025 14:48:48 +0000 (0:00:01.813) 0:00:03.173 ********* 2025-05-14 14:53:05.056335 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:53:05.056347 | orchestrator | 2025-05-14 14:53:05.056385 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-14 14:53:05.056399 | orchestrator | Wednesday 14 May 2025 14:48:50 +0000 (0:00:02.112) 0:00:05.286 ********* 2025-05-14 14:53:05.056414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.056517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 14:53:05.056534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.056560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.056583 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.056595 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.056635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.056655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.056674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.056698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.056711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.056727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.056739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.056751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.056775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.056859 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:53:05.056926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056953 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.056965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056987 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.056999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.057010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.057026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.057038 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.057062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.057074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.057086 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.057098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.057114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.057126 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.057150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.057162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.057175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.057187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.057203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.057226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.057244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.057342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.057356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.057368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.057385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.057403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.058277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.058369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.058402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.058442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.058475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.058528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.058553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.058565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.058601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.058613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.058662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.058674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.058698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.058711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.058731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.058744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.058756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058768 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.058789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.058813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.058845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.058857 | orchestrator | 2025-05-14 14:53:05.058871 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-14 14:53:05.058883 | 
orchestrator | Wednesday 14 May 2025 14:48:55 +0000 (0:00:04.496) 0:00:09.783 ********* 2025-05-14 14:53:05.058895 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:53:05.058907 | orchestrator | 2025-05-14 14:53:05.058918 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-14 14:53:05.058929 | orchestrator | Wednesday 14 May 2025 14:48:57 +0000 (0:00:02.687) 0:00:12.471 ********* 2025-05-14 14:53:05.058941 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 14:53:05.058963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.058975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.058987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.059005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.059017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.059028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.059040 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.059058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059098 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059157 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:53:05.059174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 
14:53:05.059296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.059343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.059385 | 
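The changed/skipping items above show the data shape the service-cert-copy role loops over: a per-project map of service definitions (container_name, group, enabled, image, volumes, and optional haproxy settings). The following is a minimal Ansible sketch of that CA-copy loop pattern as implied by the log; it is an illustration only, not the actual kolla-ansible role source, and the variable names used here (project_services, project_name, kolla_certificates_dir, node_config_directory) are assumptions.

# Minimal sketch, assuming kolla-style variables; not the real role source.
- name: "{{ project_name }} | Copying over extra CA certificates"
  become: true
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"                              # assumed CA source directory
    dest: "{{ node_config_directory }}/{{ item.key }}/ca-certificates/"  # e.g. /etc/kolla/prometheus-server/ca-certificates/
    mode: "0644"
  with_dict: "{{ project_services }}"                                    # the service map seen in the items above
  when:
    - item.value.enabled | bool                                          # only enabled services
    - inventory_hostname in groups[item.value.group]                     # only hosts in the service's group

The subsequent "backend internal TLS certificate" and "backend internal TLS key" tasks below are skipped on every host, which is consistent with backend TLS not being enabled in this testbed configuration.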
orchestrator | 2025-05-14 14:53:05.059397 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-14 14:53:05.059408 | orchestrator | Wednesday 14 May 2025 14:49:03 +0000 (0:00:05.960) 0:00:18.431 ********* 2025-05-14 14:53:05.059420 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.059436 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059448 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.059480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059497 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059631 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.059643 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.059654 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.059665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059704 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.059722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.059786 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.059801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059842 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.059860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059896 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.059906 | orchestrator | 2025-05-14 14:53:05.059918 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-14 14:53:05.059929 | orchestrator | Wednesday 14 May 2025 14:49:05 +0000 (0:00:02.185) 0:00:20.617 ********* 2025-05-14 14:53:05.059941 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.059957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.059969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.059981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.060005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060053 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.060069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.060082 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060102 | orchestrator | skipping: 
[testbed-manager] 2025-05-14 14:53:05.060119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.060131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060177 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.060195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.060207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.060301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.060312 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.060324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060347 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.060363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.060381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060411 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.060423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 14:53:05.060435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.060457 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.060469 | orchestrator | 2025-05-14 14:53:05.060480 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-14 14:53:05.060491 | orchestrator | Wednesday 14 May 2025 14:49:08 +0000 (0:00:02.923) 0:00:23.540 ********* 2025-05-14 14:53:05.060507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.060524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.061416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.061445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.061458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061469 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 14:53:05.061488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.061510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.061530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061554 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061566 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061578 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.061612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.061710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.061751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.061764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.061775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.061787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.061804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.061821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.061839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.061851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.061862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.061900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.061947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.061967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.061978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.061989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062007 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:53:05.062057 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.062077 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.062105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.062117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.062135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.062149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.062167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.062179 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.062207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.062219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062249 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.062283 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.062296 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.062324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.062343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout 
server 45s']}}}})  2025-05-14 14:53:05.062355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.062371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.062386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.062397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.062407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.062434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.062460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.062485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.062510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.062537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.062547 | orchestrator | 2025-05-14 14:53:05.062557 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-14 14:53:05.062568 | orchestrator | Wednesday 14 May 2025 14:49:15 +0000 (0:00:06.900) 0:00:30.440 ********* 2025-05-14 14:53:05.062578 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:53:05.062588 | orchestrator | 2025-05-14 14:53:05.062598 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-14 14:53:05.062608 | orchestrator | Wednesday 14 May 2025 14:49:16 +0000 (0:00:00.676) 0:00:31.117 ********* 2025-05-14 14:53:05.062618 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-05-14 14:53:05.062632 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062643 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062653 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062668 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062687 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062697 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062707 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062721 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062742 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062757 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062774 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1081430, 'dev': 127, 'nlink': 1, 
'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1766915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.062784 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062795 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062805 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062819 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062829 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062851 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062861 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062871 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062881 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062892 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062906 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062916 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062937 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062948 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062958 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062968 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.062978 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-05-14 14:53:05.062992 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063002 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063094 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063107 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063118 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063128 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1081444, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063139 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063164 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063205 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063218 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063229 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 
'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063243 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063267 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063284 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063300 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063339 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063351 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063361 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063372 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063382 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063396 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1081434, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1776915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063412 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063447 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063459 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063470 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063481 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063491 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063508 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063524 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063558 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063571 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063581 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063591 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063601 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.063612 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 
'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063627 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.063641 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063651 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.063661 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063708 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.063718 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.063728 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 14:53:05.063737 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.063747 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081443, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063758 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081482, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063773 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081449, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063788 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081441, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1806915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063798 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081445, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1816914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063833 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081478, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1896915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063846 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 5987, 'inode': 1081437, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1786914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1081454, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1836915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 14:53:05.063866 | orchestrator | 2025-05-14 14:53:05.063876 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-14 14:53:05.063886 | orchestrator | Wednesday 14 May 2025 14:49:56 +0000 (0:00:39.702) 0:01:10.819 ********* 2025-05-14 14:53:05.063896 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:53:05.063905 | orchestrator | 2025-05-14 14:53:05.063915 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-14 14:53:05.063928 | orchestrator | Wednesday 14 May 2025 14:49:56 +0000 (0:00:00.463) 0:01:11.282 ********* 2025-05-14 14:53:05.063939 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.063949 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.063958 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-14 14:53:05.063968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.063978 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.063987 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:53:05.063997 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.064007 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064016 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-14 14:53:05.064026 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064035 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.064045 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:53:05.064055 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.064068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064078 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-14 14:53:05.064088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064098 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.064107 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 14:53:05.064117 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.064126 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064136 | orchestrator | node-2/prometheus.yml.d' path 
due to this access issue: 2025-05-14 14:53:05.064146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064155 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.064164 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.064174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064184 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-14 14:53:05.064193 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064202 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.064212 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.064222 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064231 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-14 14:53:05.064241 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064250 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.064275 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.064285 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064295 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-14 14:53:05.064336 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 14:53:05.064348 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-14 14:53:05.064358 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 14:53:05.064368 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 14:53:05.064377 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 14:53:05.064387 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 14:53:05.064396 | orchestrator | 2025-05-14 14:53:05.064405 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-14 14:53:05.064421 | orchestrator | Wednesday 14 May 2025 14:49:57 +0000 (0:00:01.387) 0:01:12.669 ********* 2025-05-14 14:53:05.064431 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 14:53:05.064441 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.064450 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 14:53:05.064460 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.064470 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 14:53:05.064480 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.064489 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 14:53:05.064499 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.064509 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 14:53:05.064518 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.064528 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 14:53:05.064538 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
14:53:05.064547 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-14 14:53:05.064557 | orchestrator | 2025-05-14 14:53:05.064566 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-14 14:53:05.064576 | orchestrator | Wednesday 14 May 2025 14:50:13 +0000 (0:00:15.519) 0:01:28.188 ********* 2025-05-14 14:53:05.064586 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 14:53:05.064596 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.064605 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 14:53:05.064615 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.064624 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 14:53:05.064633 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.064643 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 14:53:05.064652 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.064662 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 14:53:05.064671 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.064681 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 14:53:05.064691 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.064700 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-14 14:53:05.064710 | orchestrator | 2025-05-14 14:53:05.064719 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-14 14:53:05.064733 | orchestrator | Wednesday 14 May 2025 14:50:18 +0000 (0:00:05.054) 0:01:33.242 ********* 2025-05-14 14:53:05.064743 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 14:53:05.064753 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.064763 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 14:53:05.064773 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 14:53:05.064783 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.064792 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.064802 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 14:53:05.064817 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.064827 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 14:53:05.064836 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.064846 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 14:53:05.064856 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 14:53:05.064865 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-14 14:53:05.064875 | orchestrator | 2025-05-14 14:53:05.064885 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-14 14:53:05.064894 | orchestrator | Wednesday 14 May 2025 14:50:25 +0000 (0:00:07.043) 0:01:40.286 ********* 2025-05-14 14:53:05.064904 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:53:05.064914 | orchestrator | 2025-05-14 14:53:05.064927 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-14 14:53:05.064938 | orchestrator | Wednesday 14 May 2025 14:50:26 +0000 (0:00:00.694) 0:01:40.980 ********* 2025-05-14 14:53:05.064947 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.064957 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.064966 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.064976 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.064985 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.064995 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065004 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065013 | orchestrator | 2025-05-14 14:53:05.065023 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-14 14:53:05.065032 | orchestrator | Wednesday 14 May 2025 14:50:27 +0000 (0:00:00.917) 0:01:41.898 ********* 2025-05-14 14:53:05.065042 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.065051 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.065061 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065070 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065079 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:05.065089 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:05.065098 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:05.065107 | orchestrator | 2025-05-14 14:53:05.065117 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-14 14:53:05.065127 | orchestrator | Wednesday 14 May 2025 14:50:30 +0000 (0:00:03.785) 0:01:45.683 ********* 2025-05-14 14:53:05.065136 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065146 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.065155 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065165 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065175 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.065184 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.065194 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065203 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.065213 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065222 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065232 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065241 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065251 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 14:53:05.065316 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.065326 | orchestrator | 2025-05-14 14:53:05.065336 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-14 14:53:05.065346 | orchestrator | Wednesday 14 May 2025 14:50:33 +0000 (0:00:02.765) 0:01:48.449 ********* 2025-05-14 14:53:05.065355 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 14:53:05.065365 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.065374 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 14:53:05.065384 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.065393 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 14:53:05.065407 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.065417 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 14:53:05.065426 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.065436 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 14:53:05.065445 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065455 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 14:53:05.065464 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065474 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-14 14:53:05.065484 | orchestrator | 2025-05-14 14:53:05.065493 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-14 14:53:05.065503 | orchestrator | Wednesday 14 May 2025 14:50:37 +0000 (0:00:03.398) 0:01:51.847 ********* 2025-05-14 14:53:05.065512 | orchestrator | [WARNING]: Skipped 2025-05-14 14:53:05.065522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-14 14:53:05.065531 | orchestrator | due to this access issue: 2025-05-14 14:53:05.065541 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-14 14:53:05.065551 | orchestrator | not a directory 2025-05-14 14:53:05.065561 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 14:53:05.065570 | orchestrator | 2025-05-14 14:53:05.065580 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-14 14:53:05.065589 | orchestrator | Wednesday 14 May 2025 14:50:38 +0000 (0:00:01.786) 0:01:53.633 ********* 2025-05-14 14:53:05.065599 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.065608 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.065618 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.065628 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.065637 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 14:53:05.065646 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065661 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065671 | orchestrator | 2025-05-14 14:53:05.065681 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-14 14:53:05.065691 | orchestrator | Wednesday 14 May 2025 14:50:39 +0000 (0:00:01.065) 0:01:54.699 ********* 2025-05-14 14:53:05.065700 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.065709 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.065719 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.065728 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.065738 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.065747 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065757 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065766 | orchestrator | 2025-05-14 14:53:05.065776 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-14 14:53:05.065791 | orchestrator | Wednesday 14 May 2025 14:50:40 +0000 (0:00:00.851) 0:01:55.550 ********* 2025-05-14 14:53:05.065800 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065810 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.065820 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065828 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.065836 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065844 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.065851 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065859 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.065867 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065875 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.065883 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065891 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.065898 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 14:53:05.065906 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.065914 | orchestrator | 2025-05-14 14:53:05.065922 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-14 14:53:05.065930 | orchestrator | Wednesday 14 May 2025 14:50:43 +0000 (0:00:02.459) 0:01:58.009 ********* 2025-05-14 14:53:05.065937 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.065945 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:05.065953 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.065961 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:05.065968 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.065976 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:05.065984 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.065992 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:05.066000 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.066008 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:05.066039 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.066050 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:05.066058 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 14:53:05.066066 | orchestrator | skipping: [testbed-manager] 2025-05-14 14:53:05.066074 | orchestrator | 2025-05-14 14:53:05.066081 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-14 14:53:05.066089 | orchestrator | Wednesday 14 May 2025 14:50:46 +0000 (0:00:03.533) 0:02:01.543 ********* 2025-05-14 14:53:05.066098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.066120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.066129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.066138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.066150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.066158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 14:53:05.066176 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}}}}) 2025-05-14 14:53:05.066185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066330 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 14:53:05.066372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066422 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.066462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066487 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.066512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.066564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 14:53:05.066608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066619 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066705 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066714 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.066723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.066777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.066808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 14:53:05.066833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 14:53:05.066846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 14:53:05.066855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.066880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.066925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 14:53:05.066942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 14:53:05.066963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 14:53:05.066971 | orchestrator | 2025-05-14 14:53:05.066979 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-14 14:53:05.066987 | orchestrator | Wednesday 14 May 2025 14:50:52 +0000 (0:00:05.592) 0:02:07.135 ********* 2025-05-14 14:53:05.066995 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-14 14:53:05.067003 | orchestrator | 2025-05-14 14:53:05.067011 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 14:53:05.067022 | orchestrator | Wednesday 14 May 2025 14:50:54 +0000 (0:00:02.421) 0:02:09.556 ********* 2025-05-14 14:53:05.067030 | orchestrator | 2025-05-14 14:53:05.067038 | orchestrator | TASK [prometheus : 
Flush handlers] ********************************************* 2025-05-14 14:53:05.067045 | orchestrator | Wednesday 14 May 2025 14:50:54 +0000 (0:00:00.052) 0:02:09.608 ********* 2025-05-14 14:53:05.067054 | orchestrator | 2025-05-14 14:53:05.067068 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 14:53:05.067084 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.159) 0:02:09.768 ********* 2025-05-14 14:53:05.067104 | orchestrator | 2025-05-14 14:53:05.067117 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 14:53:05.067130 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.049) 0:02:09.817 ********* 2025-05-14 14:53:05.067142 | orchestrator | 2025-05-14 14:53:05.067156 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 14:53:05.067171 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.047) 0:02:09.865 ********* 2025-05-14 14:53:05.067180 | orchestrator | 2025-05-14 14:53:05.067188 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 14:53:05.067195 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.047) 0:02:09.912 ********* 2025-05-14 14:53:05.067203 | orchestrator | 2025-05-14 14:53:05.067211 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 14:53:05.067219 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.185) 0:02:10.097 ********* 2025-05-14 14:53:05.067227 | orchestrator | 2025-05-14 14:53:05.067234 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-14 14:53:05.067242 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.062) 0:02:10.160 ********* 2025-05-14 14:53:05.067250 | orchestrator | changed: [testbed-manager] 2025-05-14 14:53:05.067272 | orchestrator | 2025-05-14 14:53:05.067280 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-14 14:53:05.067289 | orchestrator | Wednesday 14 May 2025 14:51:12 +0000 (0:00:16.621) 0:02:26.781 ********* 2025-05-14 14:53:05.067297 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:53:05.067305 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:05.067319 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:05.067327 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:05.067335 | orchestrator | changed: [testbed-manager] 2025-05-14 14:53:05.067343 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:53:05.067351 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:53:05.067359 | orchestrator | 2025-05-14 14:53:05.067374 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-14 14:53:05.067382 | orchestrator | Wednesday 14 May 2025 14:51:34 +0000 (0:00:22.022) 0:02:48.803 ********* 2025-05-14 14:53:05.067390 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:05.067398 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:05.067405 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:05.067413 | orchestrator | 2025-05-14 14:53:05.067421 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-14 14:53:05.067429 | orchestrator | Wednesday 14 May 2025 14:51:44 +0000 (0:00:10.137) 0:02:58.941 ********* 2025-05-14 14:53:05.067437 
| orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:05.067444 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:05.067452 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:05.067460 | orchestrator | 2025-05-14 14:53:05.067468 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-14 14:53:05.067475 | orchestrator | Wednesday 14 May 2025 14:51:57 +0000 (0:00:12.977) 0:03:11.918 ********* 2025-05-14 14:53:05.067483 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:05.067491 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:05.067498 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:53:05.067506 | orchestrator | changed: [testbed-manager] 2025-05-14 14:53:05.067514 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:05.067521 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:53:05.067529 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:53:05.067536 | orchestrator | 2025-05-14 14:53:05.067544 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-14 14:53:05.067552 | orchestrator | Wednesday 14 May 2025 14:52:17 +0000 (0:00:20.647) 0:03:32.566 ********* 2025-05-14 14:53:05.067560 | orchestrator | changed: [testbed-manager] 2025-05-14 14:53:05.067568 | orchestrator | 2025-05-14 14:53:05.067575 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-14 14:53:05.067583 | orchestrator | Wednesday 14 May 2025 14:52:27 +0000 (0:00:09.970) 0:03:42.537 ********* 2025-05-14 14:53:05.067591 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:05.067599 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:05.067606 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:05.067614 | orchestrator | 2025-05-14 14:53:05.067622 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-14 14:53:05.067630 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:12.252) 0:03:54.790 ********* 2025-05-14 14:53:05.067638 | orchestrator | changed: [testbed-manager] 2025-05-14 14:53:05.067645 | orchestrator | 2025-05-14 14:53:05.067653 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-14 14:53:05.067661 | orchestrator | Wednesday 14 May 2025 14:52:49 +0000 (0:00:09.265) 0:04:04.055 ********* 2025-05-14 14:53:05.067669 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:53:05.067676 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:53:05.067684 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:53:05.067691 | orchestrator | 2025-05-14 14:53:05.067699 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:53:05.067707 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-14 14:53:05.067716 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 14:53:05.067728 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 14:53:05.067736 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 14:53:05.067744 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 14:53:05.067756 | 
orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 14:53:05.067764 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 14:53:05.067772 | orchestrator | 2025-05-14 14:53:05.067780 | orchestrator | 2025-05-14 14:53:05.067788 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:53:05.067796 | orchestrator | Wednesday 14 May 2025 14:53:01 +0000 (0:00:12.537) 0:04:16.593 ********* 2025-05-14 14:53:05.067804 | orchestrator | =============================================================================== 2025-05-14 14:53:05.067811 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.70s 2025-05-14 14:53:05.067819 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 22.02s 2025-05-14 14:53:05.067827 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.65s 2025-05-14 14:53:05.067835 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.62s 2025-05-14 14:53:05.067843 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.52s 2025-05-14 14:53:05.067850 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.98s 2025-05-14 14:53:05.067862 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.54s 2025-05-14 14:53:05.067870 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.25s 2025-05-14 14:53:05.067878 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.14s 2025-05-14 14:53:05.067885 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.97s 2025-05-14 14:53:05.067893 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.27s 2025-05-14 14:53:05.067901 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 7.04s 2025-05-14 14:53:05.067909 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.90s 2025-05-14 14:53:05.067917 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.96s 2025-05-14 14:53:05.067924 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.59s 2025-05-14 14:53:05.067932 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.05s 2025-05-14 14:53:05.067940 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.50s 2025-05-14 14:53:05.067948 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.79s 2025-05-14 14:53:05.067956 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 3.53s 2025-05-14 14:53:05.067964 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.40s 2025-05-14 14:53:05.067972 | orchestrator | 2025-05-14 14:53:05 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:05.067980 | orchestrator | 2025-05-14 14:53:05 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:05.067988 | orchestrator | 2025-05-14 14:53:05 | INFO  | Wait 1 second(s) until the next 
check 2025-05-14 14:53:08.086658 | orchestrator | 2025-05-14 14:53:08 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:08.089420 | orchestrator | 2025-05-14 14:53:08 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:08.089433 | orchestrator | 2025-05-14 14:53:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:08.089437 | orchestrator | 2025-05-14 14:53:08 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:08.089454 | orchestrator | 2025-05-14 14:53:08 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:08.089458 | orchestrator | 2025-05-14 14:53:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:11.121205 | orchestrator | 2025-05-14 14:53:11 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:11.124303 | orchestrator | 2025-05-14 14:53:11 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:11.124341 | orchestrator | 2025-05-14 14:53:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:11.124354 | orchestrator | 2025-05-14 14:53:11 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:11.124365 | orchestrator | 2025-05-14 14:53:11 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:11.124376 | orchestrator | 2025-05-14 14:53:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:14.153498 | orchestrator | 2025-05-14 14:53:14 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:14.155100 | orchestrator | 2025-05-14 14:53:14 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:14.155512 | orchestrator | 2025-05-14 14:53:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:14.155936 | orchestrator | 2025-05-14 14:53:14 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:14.156443 | orchestrator | 2025-05-14 14:53:14 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:14.157347 | orchestrator | 2025-05-14 14:53:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:17.188776 | orchestrator | 2025-05-14 14:53:17 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:17.191762 | orchestrator | 2025-05-14 14:53:17 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:17.193943 | orchestrator | 2025-05-14 14:53:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:17.195154 | orchestrator | 2025-05-14 14:53:17 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:17.196490 | orchestrator | 2025-05-14 14:53:17 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:17.196840 | orchestrator | 2025-05-14 14:53:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:20.240225 | orchestrator | 2025-05-14 14:53:20 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:20.243960 | orchestrator | 2025-05-14 14:53:20 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:20.245768 | orchestrator | 2025-05-14 14:53:20 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:20.247220 | orchestrator | 2025-05-14 14:53:20 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:20.248272 | orchestrator | 2025-05-14 14:53:20 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:20.248298 | orchestrator | 2025-05-14 14:53:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:23.297595 | orchestrator | 2025-05-14 14:53:23 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:23.299527 | orchestrator | 2025-05-14 14:53:23 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:23.307145 | orchestrator | 2025-05-14 14:53:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:23.309525 | orchestrator | 2025-05-14 14:53:23 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:23.311744 | orchestrator | 2025-05-14 14:53:23 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state STARTED 2025-05-14 14:53:23.312094 | orchestrator | 2025-05-14 14:53:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:26.352367 | orchestrator | 2025-05-14 14:53:26 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:26.354537 | orchestrator | 2025-05-14 14:53:26 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:26.356980 | orchestrator | 2025-05-14 14:53:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:26.359019 | orchestrator | 2025-05-14 14:53:26 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:26.361525 | orchestrator | 2025-05-14 14:53:26 | INFO  | Task 58cdff0a-32ea-4ac8-85c3-b8ed012477cc is in state SUCCESS 2025-05-14 14:53:26.361566 | orchestrator | 2025-05-14 14:53:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:26.363324 | orchestrator | 2025-05-14 14:53:26.363356 | orchestrator | 2025-05-14 14:53:26.363368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:53:26.363380 | orchestrator | 2025-05-14 14:53:26.363391 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:53:26.363404 | orchestrator | Wednesday 14 May 2025 14:50:10 +0000 (0:00:00.363) 0:00:00.363 ********* 2025-05-14 14:53:26.363415 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:53:26.363428 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:53:26.363439 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:53:26.363450 | orchestrator | 2025-05-14 14:53:26.363480 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:53:26.363491 | orchestrator | Wednesday 14 May 2025 14:50:10 +0000 (0:00:00.479) 0:00:00.842 ********* 2025-05-14 14:53:26.363507 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-14 14:53:26.363526 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-14 14:53:26.363546 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-14 14:53:26.363564 | orchestrator | 2025-05-14 14:53:26.363582 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-14 14:53:26.363601 | orchestrator | 2025-05-14 14:53:26.363618 | orchestrator | TASK 
[glance : include_tasks] ************************************************** 2025-05-14 14:53:26.363633 | orchestrator | Wednesday 14 May 2025 14:50:10 +0000 (0:00:00.345) 0:00:01.187 ********* 2025-05-14 14:53:26.363650 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:53:26.363671 | orchestrator | 2025-05-14 14:53:26.363690 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-14 14:53:26.363709 | orchestrator | Wednesday 14 May 2025 14:50:11 +0000 (0:00:00.818) 0:00:02.006 ********* 2025-05-14 14:53:26.363721 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-14 14:53:26.363732 | orchestrator | 2025-05-14 14:53:26.363743 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-14 14:53:26.363754 | orchestrator | Wednesday 14 May 2025 14:50:15 +0000 (0:00:03.599) 0:00:05.606 ********* 2025-05-14 14:53:26.363765 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-14 14:53:26.363777 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-14 14:53:26.363788 | orchestrator | 2025-05-14 14:53:26.363799 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-14 14:53:26.363809 | orchestrator | Wednesday 14 May 2025 14:50:22 +0000 (0:00:07.068) 0:00:12.674 ********* 2025-05-14 14:53:26.363844 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:53:26.363856 | orchestrator | 2025-05-14 14:53:26.363867 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-14 14:53:26.363878 | orchestrator | Wednesday 14 May 2025 14:50:25 +0000 (0:00:03.462) 0:00:16.137 ********* 2025-05-14 14:53:26.363891 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:53:26.363902 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-14 14:53:26.363915 | orchestrator | 2025-05-14 14:53:26.363928 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-14 14:53:26.363940 | orchestrator | Wednesday 14 May 2025 14:50:29 +0000 (0:00:03.890) 0:00:20.028 ********* 2025-05-14 14:53:26.363952 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:53:26.363964 | orchestrator | 2025-05-14 14:53:26.363976 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-14 14:53:26.363988 | orchestrator | Wednesday 14 May 2025 14:50:33 +0000 (0:00:03.568) 0:00:23.597 ********* 2025-05-14 14:53:26.364000 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-14 14:53:26.364012 | orchestrator | 2025-05-14 14:53:26.364024 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-14 14:53:26.364036 | orchestrator | Wednesday 14 May 2025 14:50:37 +0000 (0:00:04.225) 0:00:27.822 ********* 2025-05-14 14:53:26.364075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.364102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.364125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.364155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.364177 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.364204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.364224 | orchestrator | 2025-05-14 14:53:26.364270 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 14:53:26.364283 | orchestrator | Wednesday 14 May 2025 14:50:42 +0000 (0:00:05.100) 0:00:32.923 ********* 2025-05-14 14:53:26.364295 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:53:26.364306 | orchestrator | 2025-05-14 14:53:26.364317 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-14 14:53:26.364327 | orchestrator | Wednesday 14 May 2025 14:50:43 +0000 (0:00:00.459) 0:00:33.383 ********* 2025-05-14 14:53:26.364338 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:26.364349 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.364360 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:26.364370 | orchestrator | 2025-05-14 14:53:26.364381 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-14 14:53:26.364392 | orchestrator | Wednesday 14 May 2025 14:50:51 +0000 (0:00:08.152) 0:00:41.535 ********* 2025-05-14 14:53:26.364403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:26.364414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:26.364425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:26.364436 | orchestrator | 2025-05-14 14:53:26.364446 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-14 14:53:26.364457 | orchestrator | Wednesday 14 May 2025 14:50:52 +0000 (0:00:01.583) 0:00:43.118 ********* 2025-05-14 14:53:26.364468 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:26.364479 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:26.364490 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:26.364501 | orchestrator | 2025-05-14 14:53:26.364512 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-14 14:53:26.364523 | orchestrator | Wednesday 14 May 2025 14:50:54 +0000 (0:00:01.099) 0:00:44.218 ********* 2025-05-14 14:53:26.364534 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:53:26.364544 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:53:26.364555 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:53:26.364566 | orchestrator | 2025-05-14 14:53:26.364577 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-14 14:53:26.364588 | orchestrator | Wednesday 14 May 2025 14:50:54 +0000 (0:00:00.716) 0:00:44.934 ********* 2025-05-14 14:53:26.364598 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.364609 | orchestrator | 2025-05-14 14:53:26.364620 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-14 14:53:26.364631 | orchestrator | Wednesday 14 May 2025 14:50:54 +0000 (0:00:00.118) 
0:00:45.052 ********* 2025-05-14 14:53:26.364641 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.364653 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.364663 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.364674 | orchestrator | 2025-05-14 14:53:26.364685 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 14:53:26.364696 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.315) 0:00:45.368 ********* 2025-05-14 14:53:26.364706 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:53:26.364717 | orchestrator | 2025-05-14 14:53:26.364736 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-14 14:53:26.364747 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:00.654) 0:00:46.023 ********* 2025-05-14 14:53:26.364774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.364788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.364815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.364836 | orchestrator | 2025-05-14 14:53:26.364847 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-14 14:53:26.364858 | orchestrator | Wednesday 14 May 2025 14:50:59 +0000 (0:00:04.118) 0:00:50.141 ********* 2025-05-14 14:53:26.364870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:53:26.364883 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.364914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:53:26.364934 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.364946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:53:26.364958 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.364969 | orchestrator | 2025-05-14 14:53:26.364980 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-14 14:53:26.364991 | orchestrator | Wednesday 14 May 2025 14:51:02 +0000 (0:00:02.626) 0:00:52.767 ********* 2025-05-14 14:53:26.365009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:53:26.365029 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:53:26.365057 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 14:53:26.365087 | orchestrator | skipping: [testbed-node-2] 
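For readability, the "glance-api" item that the preceding copy-certs/TLS tasks loop over can be laid out as a plain Python literal. All values below are copied from the log output above for testbed-node-0; the variable name glance_api_service is only an illustrative label, and the haproxy custom_member_list entries, empty volume strings and dimensions are omitted for brevity.

# Reconstruction (readability only) of the logged glance-api item for testbed-node-0.
glance_api_service = {
    "container_name": "glance_api",
    "group": "glance-api",
    "host_in_groups": True,
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/glance-api:28.1.1.20241206",
    "environment": {
        "http_proxy": "",
        "https_proxy": "",
        "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9",
    },
    "privileged": True,
    "volumes": [
        "/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "glance:/var/lib/glance/",
        "kolla_logs:/var/log/kolla/",
        "iscsi_info:/etc/iscsi",
        "/dev:/dev",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
        "timeout": "30",
    },
    "haproxy": {
        "glance_api": {
            "enabled": True,
            "mode": "http",
            "external": False,
            "port": "9292",
            "frontend_http_extra": ["timeout client 6h"],
            "backend_http_extra": ["timeout server 6h"],
        },
        "glance_api_external": {
            "enabled": True,
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9292",
        },
    },
}

The glance-tls-proxy companion item is skipped on every node because its 'enabled' flag is 'no', which is why only the glance_api container is configured and restarted in this run.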
2025-05-14 14:53:26.365098 | orchestrator | 2025-05-14 14:53:26.365109 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-14 14:53:26.365120 | orchestrator | Wednesday 14 May 2025 14:51:05 +0000 (0:00:03.087) 0:00:55.854 ********* 2025-05-14 14:53:26.365131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365142 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365153 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365164 | orchestrator | 2025-05-14 14:53:26.365180 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-14 14:53:26.365191 | orchestrator | Wednesday 14 May 2025 14:51:09 +0000 (0:00:04.055) 0:00:59.910 ********* 2025-05-14 14:53:26.365208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.365222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.365277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.365292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.365324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.365338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.365351 | orchestrator | 2025-05-14 14:53:26.365362 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-14 14:53:26.365379 | orchestrator | Wednesday 14 May 2025 14:51:17 +0000 (0:00:07.929) 0:01:07.839 ********* 2025-05-14 14:53:26.365390 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:26.365401 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.365411 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:26.365422 | orchestrator | 2025-05-14 14:53:26.365433 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-14 14:53:26.365444 | orchestrator | Wednesday 14 May 2025 14:51:31 +0000 (0:00:14.303) 0:01:22.142 ********* 2025-05-14 14:53:26.365455 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365465 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365476 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365487 | orchestrator | 2025-05-14 14:53:26.365497 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-14 14:53:26.365508 | orchestrator | Wednesday 14 May 2025 14:51:40 +0000 (0:00:08.660) 0:01:30.802 ********* 2025-05-14 14:53:26.365519 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365530 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365540 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365551 | orchestrator | 2025-05-14 14:53:26.365562 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-14 14:53:26.365572 | orchestrator | Wednesday 14 May 2025 14:51:48 +0000 (0:00:08.259) 0:01:39.062 ********* 2025-05-14 14:53:26.365583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365594 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365604 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365615 | orchestrator | 2025-05-14 14:53:26.365626 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-14 14:53:26.365637 | orchestrator | Wednesday 14 May 2025 14:51:56 +0000 (0:00:08.004) 0:01:47.066 ********* 2025-05-14 14:53:26.365647 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 14:53:26.365664 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365675 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365686 | orchestrator | 2025-05-14 14:53:26.365697 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-14 14:53:26.365708 | orchestrator | Wednesday 14 May 2025 14:52:10 +0000 (0:00:13.797) 0:02:00.864 ********* 2025-05-14 14:53:26.365718 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365729 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365740 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365751 | orchestrator | 2025-05-14 14:53:26.365761 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-14 14:53:26.365777 | orchestrator | Wednesday 14 May 2025 14:52:10 +0000 (0:00:00.242) 0:02:01.107 ********* 2025-05-14 14:53:26.365788 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 14:53:26.365799 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.365810 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 14:53:26.365821 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 14:53:26.365832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.365842 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.365853 | orchestrator | 2025-05-14 14:53:26.365868 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-14 14:53:26.365887 | orchestrator | Wednesday 14 May 2025 14:52:15 +0000 (0:00:05.061) 0:02:06.168 ********* 2025-05-14 14:53:26.365906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.365959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.365983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.366093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.366123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 14:53:26.366145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 14:53:26.366157 | orchestrator | 2025-05-14 14:53:26.366168 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 14:53:26.366179 | orchestrator | Wednesday 14 May 2025 14:52:19 +0000 (0:00:03.542) 0:02:09.710 ********* 2025-05-14 14:53:26.366190 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:26.366201 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:26.366211 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:26.366222 | orchestrator | 2025-05-14 14:53:26.366283 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-14 14:53:26.366297 | orchestrator | Wednesday 14 May 2025 14:52:19 +0000 (0:00:00.296) 0:02:10.007 ********* 2025-05-14 14:53:26.366308 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.366318 | orchestrator | 2025-05-14 14:53:26.366329 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-14 14:53:26.366340 | orchestrator | Wednesday 14 May 2025 14:52:22 +0000 (0:00:02.259) 0:02:12.266 ********* 2025-05-14 14:53:26.366351 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.366361 | orchestrator | 2025-05-14 
14:53:26.366377 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-14 14:53:26.366388 | orchestrator | Wednesday 14 May 2025 14:52:24 +0000 (0:00:02.426) 0:02:14.693 ********* 2025-05-14 14:53:26.366399 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.366410 | orchestrator | 2025-05-14 14:53:26.366421 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-14 14:53:26.366432 | orchestrator | Wednesday 14 May 2025 14:52:26 +0000 (0:00:02.232) 0:02:16.925 ********* 2025-05-14 14:53:26.366451 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.366462 | orchestrator | 2025-05-14 14:53:26.366473 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-14 14:53:26.366484 | orchestrator | Wednesday 14 May 2025 14:52:51 +0000 (0:00:24.974) 0:02:41.900 ********* 2025-05-14 14:53:26.366494 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.366505 | orchestrator | 2025-05-14 14:53:26.366516 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 14:53:26.366527 | orchestrator | Wednesday 14 May 2025 14:52:54 +0000 (0:00:02.310) 0:02:44.211 ********* 2025-05-14 14:53:26.366538 | orchestrator | 2025-05-14 14:53:26.366548 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 14:53:26.366559 | orchestrator | Wednesday 14 May 2025 14:52:54 +0000 (0:00:00.057) 0:02:44.268 ********* 2025-05-14 14:53:26.366570 | orchestrator | 2025-05-14 14:53:26.366581 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 14:53:26.366591 | orchestrator | Wednesday 14 May 2025 14:52:54 +0000 (0:00:00.057) 0:02:44.326 ********* 2025-05-14 14:53:26.366602 | orchestrator | 2025-05-14 14:53:26.366613 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-14 14:53:26.366623 | orchestrator | Wednesday 14 May 2025 14:52:54 +0000 (0:00:00.259) 0:02:44.585 ********* 2025-05-14 14:53:26.366634 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:26.366645 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:26.366656 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:26.366666 | orchestrator | 2025-05-14 14:53:26.366677 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:53:26.366689 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-14 14:53:26.366702 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-14 14:53:26.366713 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-14 14:53:26.366723 | orchestrator | 2025-05-14 14:53:26.366734 | orchestrator | 2025-05-14 14:53:26.366745 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:53:26.366756 | orchestrator | Wednesday 14 May 2025 14:53:25 +0000 (0:00:31.306) 0:03:15.891 ********* 2025-05-14 14:53:26.366766 | orchestrator | =============================================================================== 2025-05-14 14:53:26.366777 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.31s 2025-05-14 14:53:26.366788 | 
orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.97s 2025-05-14 14:53:26.366799 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 14.30s 2025-05-14 14:53:26.366810 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 13.80s 2025-05-14 14:53:26.366821 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 8.66s 2025-05-14 14:53:26.366832 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 8.26s 2025-05-14 14:53:26.366842 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 8.15s 2025-05-14 14:53:26.366853 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 8.00s 2025-05-14 14:53:26.366864 | orchestrator | glance : Copying over config.json files for services -------------------- 7.93s 2025-05-14 14:53:26.366874 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.07s 2025-05-14 14:53:26.366885 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.10s 2025-05-14 14:53:26.366896 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.06s 2025-05-14 14:53:26.366906 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.23s 2025-05-14 14:53:26.366923 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.12s 2025-05-14 14:53:26.366934 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.06s 2025-05-14 14:53:26.366945 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.89s 2025-05-14 14:53:26.366956 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.60s 2025-05-14 14:53:26.366966 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.57s 2025-05-14 14:53:26.366978 | orchestrator | glance : Check glance containers ---------------------------------------- 3.54s 2025-05-14 14:53:26.366994 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.46s 2025-05-14 14:53:29.418746 | orchestrator | 2025-05-14 14:53:29 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:29.419090 | orchestrator | 2025-05-14 14:53:29 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:29.419611 | orchestrator | 2025-05-14 14:53:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:29.420470 | orchestrator | 2025-05-14 14:53:29 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:29.422491 | orchestrator | 2025-05-14 14:53:29 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:29.422927 | orchestrator | 2025-05-14 14:53:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:32.475752 | orchestrator | 2025-05-14 14:53:32 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:32.476987 | orchestrator | 2025-05-14 14:53:32 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:32.480985 | orchestrator | 2025-05-14 14:53:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:32.482552 
| orchestrator | 2025-05-14 14:53:32 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:32.485019 | orchestrator | 2025-05-14 14:53:32 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:32.485221 | orchestrator | 2025-05-14 14:53:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:35.534000 | orchestrator | 2025-05-14 14:53:35 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:35.536102 | orchestrator | 2025-05-14 14:53:35 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:35.537091 | orchestrator | 2025-05-14 14:53:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:35.539586 | orchestrator | 2025-05-14 14:53:35 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:35.541626 | orchestrator | 2025-05-14 14:53:35 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:35.541843 | orchestrator | 2025-05-14 14:53:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:38.602078 | orchestrator | 2025-05-14 14:53:38 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:38.603452 | orchestrator | 2025-05-14 14:53:38 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:38.608935 | orchestrator | 2025-05-14 14:53:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:38.609024 | orchestrator | 2025-05-14 14:53:38 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:38.609038 | orchestrator | 2025-05-14 14:53:38 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:38.609079 | orchestrator | 2025-05-14 14:53:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:41.675603 | orchestrator | 2025-05-14 14:53:41 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:41.675710 | orchestrator | 2025-05-14 14:53:41 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:41.676399 | orchestrator | 2025-05-14 14:53:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:41.677369 | orchestrator | 2025-05-14 14:53:41 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:41.677897 | orchestrator | 2025-05-14 14:53:41 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:41.677920 | orchestrator | 2025-05-14 14:53:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:44.715523 | orchestrator | 2025-05-14 14:53:44 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:44.716057 | orchestrator | 2025-05-14 14:53:44 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:44.717004 | orchestrator | 2025-05-14 14:53:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:44.718444 | orchestrator | 2025-05-14 14:53:44 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:44.720403 | orchestrator | 2025-05-14 14:53:44 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:44.720500 | orchestrator | 2025-05-14 14:53:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:47.770590 | 
orchestrator | 2025-05-14 14:53:47 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:47.773524 | orchestrator | 2025-05-14 14:53:47 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state STARTED 2025-05-14 14:53:47.775635 | orchestrator | 2025-05-14 14:53:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:47.777801 | orchestrator | 2025-05-14 14:53:47 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:47.779544 | orchestrator | 2025-05-14 14:53:47 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:47.779690 | orchestrator | 2025-05-14 14:53:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:50.842385 | orchestrator | 2025-05-14 14:53:50 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:50.843678 | orchestrator | 2025-05-14 14:53:50 | INFO  | Task e49292e7-87e1-49ed-bb27-c5d62a819038 is in state SUCCESS 2025-05-14 14:53:50.845533 | orchestrator | 2025-05-14 14:53:50.845574 | orchestrator | 2025-05-14 14:53:50.845586 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:53:50.845598 | orchestrator | 2025-05-14 14:53:50.845608 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:53:50.845619 | orchestrator | Wednesday 14 May 2025 14:50:37 +0000 (0:00:00.290) 0:00:00.290 ********* 2025-05-14 14:53:50.845629 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:53:50.845640 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:53:50.845649 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:53:50.845754 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:53:50.845765 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:53:50.845833 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:53:50.845845 | orchestrator | 2025-05-14 14:53:50.845855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:53:50.845865 | orchestrator | Wednesday 14 May 2025 14:50:38 +0000 (0:00:00.651) 0:00:00.941 ********* 2025-05-14 14:53:50.846184 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-14 14:53:50.846242 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-14 14:53:50.846253 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-14 14:53:50.846263 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-14 14:53:50.846272 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-14 14:53:50.846282 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-14 14:53:50.846421 | orchestrator | 2025-05-14 14:53:50.846437 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-14 14:53:50.846447 | orchestrator | 2025-05-14 14:53:50.846456 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 14:53:50.846466 | orchestrator | Wednesday 14 May 2025 14:50:39 +0000 (0:00:01.080) 0:00:02.021 ********* 2025-05-14 14:53:50.846546 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:53:50.846561 | orchestrator | 2025-05-14 14:53:50.846819 | orchestrator | TASK [service-ks-register : cinder | Creating 
services] ************************ 2025-05-14 14:53:50.846829 | orchestrator | Wednesday 14 May 2025 14:50:41 +0000 (0:00:01.491) 0:00:03.513 ********* 2025-05-14 14:53:50.846840 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-14 14:53:50.846850 | orchestrator | 2025-05-14 14:53:50.846859 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-14 14:53:50.846869 | orchestrator | Wednesday 14 May 2025 14:50:44 +0000 (0:00:03.735) 0:00:07.248 ********* 2025-05-14 14:53:50.846879 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-14 14:53:50.846890 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-14 14:53:50.846900 | orchestrator | 2025-05-14 14:53:50.846910 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-14 14:53:50.846919 | orchestrator | Wednesday 14 May 2025 14:50:52 +0000 (0:00:07.186) 0:00:14.435 ********* 2025-05-14 14:53:50.846929 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:53:50.846939 | orchestrator | 2025-05-14 14:53:50.846948 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-14 14:53:50.846957 | orchestrator | Wednesday 14 May 2025 14:50:55 +0000 (0:00:03.678) 0:00:18.113 ********* 2025-05-14 14:53:50.846967 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:53:50.846977 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-14 14:53:50.846987 | orchestrator | 2025-05-14 14:53:50.846997 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-14 14:53:50.847006 | orchestrator | Wednesday 14 May 2025 14:50:59 +0000 (0:00:03.954) 0:00:22.067 ********* 2025-05-14 14:53:50.847016 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:53:50.847025 | orchestrator | 2025-05-14 14:53:50.847035 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-14 14:53:50.847044 | orchestrator | Wednesday 14 May 2025 14:51:02 +0000 (0:00:03.254) 0:00:25.321 ********* 2025-05-14 14:53:50.847054 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-14 14:53:50.847063 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-14 14:53:50.847100 | orchestrator | 2025-05-14 14:53:50.847109 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-14 14:53:50.847511 | orchestrator | Wednesday 14 May 2025 14:51:12 +0000 (0:00:09.081) 0:00:34.403 ********* 2025-05-14 14:53:50.847633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.847667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.847680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.847692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847720 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.847768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.847780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.847816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.847871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.847892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.847926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.847973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.847983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.847993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.848021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.848056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.848067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.848077 | orchestrator | 2025-05-14 14:53:50.848088 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 14:53:50.848097 | orchestrator | Wednesday 14 May 2025 14:51:16 +0000 (0:00:04.001) 0:00:38.405 ********* 2025-05-14 14:53:50.848107 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.848117 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.848127 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.848136 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:53:50.848146 | orchestrator | 2025-05-14 14:53:50.848156 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-14 14:53:50.848165 | orchestrator | Wednesday 14 May 2025 14:51:18 +0000 (0:00:02.027) 0:00:40.432 ********* 2025-05-14 14:53:50.848175 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-14 14:53:50.848184 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-14 14:53:50.848194 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-14 14:53:50.848469 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-14 14:53:50.848503 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-14 14:53:50.848517 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-14 14:53:50.848529 | orchestrator | 2025-05-14 14:53:50.848542 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-14 14:53:50.848590 | orchestrator | Wednesday 14 May 2025 14:51:23 +0000 (0:00:05.474) 0:00:45.906 ********* 2025-05-14 14:53:50.848608 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 14:53:50.848641 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 14:53:50.848777 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 14:53:50.848794 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 14:53:50.848806 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 
14:53:50.848827 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 14:53:50.848854 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 14:53:50.848900 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 14:53:50.848914 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 
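The container definitions dumped above and below attach Docker healthchecks of the form 'healthcheck_curl http://<ip>:8776' (cinder-api) and 'healthcheck_port <service> 5672' (cinder-scheduler, cinder-volume, cinder-backup), each with interval 30, retries 3 and timeout 30. As a rough illustration only, and not the actual kolla healthcheck scripts, the Python sketch below shows what such a check reduces to: a bounded TCP connect or HTTP GET. The IP and port values are copied from this log purely as examples, and the "healthy below HTTP 500" rule is an assumption of the sketch, not confirmed behaviour of healthcheck_curl.

    # Illustrative sketch only: approximates the intent of the 'healthcheck_port'
    # and 'healthcheck_curl' tests seen in the container definitions in this log.
    # It is NOT the kolla healthcheck implementation; host/port values below are
    # example values taken from the log.
    import socket
    import urllib.error
    import urllib.request


    def port_is_open(host: str, port: int, timeout: float = 30.0) -> bool:
        # Comparable in spirit to: healthcheck_port cinder-scheduler 5672
        # (plain TCP connect with a bounded timeout).
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False


    def http_is_up(url: str, timeout: float = 30.0) -> bool:
        # Comparable in spirit to: healthcheck_curl http://192.168.16.10:8776
        # Treating any response below HTTP 500 as healthy is this sketch's
        # choice, not necessarily what the real script does.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as err:
            return err.code < 500
        except OSError:
            return False


    if __name__ == "__main__":
        # Port and URL taken from the healthcheck entries in the log above.
        print(port_is_open("192.168.16.10", 5672))
        print(http_is_up("http://192.168.16.10:8776"))
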
2025-05-14 14:53:50.848926 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 14:53:50.848947 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 14:53:50.848991 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 14:53:50.849004 | orchestrator | 2025-05-14 14:53:50.849016 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-14 14:53:50.849027 | orchestrator | Wednesday 14 May 2025 14:51:28 +0000 (0:00:05.320) 0:00:51.227 ********* 2025-05-14 14:53:50.849039 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:50.849050 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:50.849061 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 14:53:50.849072 | orchestrator | 2025-05-14 14:53:50.849089 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-14 14:53:50.849106 | orchestrator | Wednesday 14 May 2025 14:51:30 +0000 (0:00:02.068) 0:00:53.295 ********* 2025-05-14 14:53:50.849118 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-14 14:53:50.849130 | 
orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-14 14:53:50.849141 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-14 14:53:50.849151 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 14:53:50.849162 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 14:53:50.849174 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 14:53:50.849185 | orchestrator | 2025-05-14 14:53:50.849250 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-14 14:53:50.849266 | orchestrator | Wednesday 14 May 2025 14:51:33 +0000 (0:00:02.744) 0:00:56.040 ********* 2025-05-14 14:53:50.849277 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-14 14:53:50.849298 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-14 14:53:50.849309 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-14 14:53:50.849319 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-14 14:53:50.849330 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-14 14:53:50.849341 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-14 14:53:50.849352 | orchestrator | 2025-05-14 14:53:50.849363 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-14 14:53:50.849374 | orchestrator | Wednesday 14 May 2025 14:51:34 +0000 (0:00:01.209) 0:00:57.249 ********* 2025-05-14 14:53:50.849385 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.849396 | orchestrator | 2025-05-14 14:53:50.849407 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-14 14:53:50.849418 | orchestrator | Wednesday 14 May 2025 14:51:35 +0000 (0:00:00.225) 0:00:57.475 ********* 2025-05-14 14:53:50.849429 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.849440 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.849451 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.849461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.849472 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.849483 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:50.849494 | orchestrator | 2025-05-14 14:53:50.849505 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 14:53:50.849516 | orchestrator | Wednesday 14 May 2025 14:51:36 +0000 (0:00:01.298) 0:00:58.773 ********* 2025-05-14 14:53:50.849529 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:53:50.849541 | orchestrator | 2025-05-14 14:53:50.849553 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-14 14:53:50.849563 | orchestrator | Wednesday 14 May 2025 14:51:38 +0000 (0:00:02.420) 0:01:01.194 ********* 2025-05-14 14:53:50.849581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.849638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.849662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.849694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849826 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.849866 | orchestrator | 2025-05-14 14:53:50.849877 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-14 14:53:50.849889 | orchestrator | Wednesday 14 May 2025 14:51:42 +0000 (0:00:03.423) 0:01:04.618 ********* 2025-05-14 14:53:50.849929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.849950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.849964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.849976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.849987 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.849999 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.850058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850130 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.850142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850165 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.850176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850233 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.850273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850305 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:50.850316 | orchestrator | 2025-05-14 14:53:50.850327 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-14 14:53:50.850338 | orchestrator | Wednesday 14 May 2025 14:51:43 +0000 (0:00:01.699) 0:01:06.317 ********* 2025-05-14 14:53:50.850350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850441 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.850452 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.850463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850487 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.850498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850529 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.850571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 
14:53:50.850596 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.850607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850631 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:50.850642 | orchestrator | 2025-05-14 14:53:50.850653 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-14 14:53:50.850664 | orchestrator | Wednesday 14 May 2025 14:51:46 +0000 (0:00:02.942) 0:01:09.260 ********* 2025-05-14 14:53:50.850680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.850727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.850826 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.850838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.850862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.850935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.850949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.850960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.850972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.850994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851112 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851136 | orchestrator | 2025-05-14 14:53:50.851147 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-14 14:53:50.851263 | orchestrator | Wednesday 14 May 2025 14:51:50 +0000 (0:00:03.606) 0:01:12.866 ********* 2025-05-14 14:53:50.851277 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 14:53:50.851289 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.851300 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 14:53:50.851311 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.851322 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 14:53:50.851333 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:50.851344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 14:53:50.851355 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 14:53:50.851366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 14:53:50.851376 | orchestrator | 2025-05-14 14:53:50.851387 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-14 14:53:50.851398 | orchestrator | Wednesday 14 May 2025 14:51:53 +0000 (0:00:03.042) 0:01:15.909 ********* 2025-05-14 14:53:50.851422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.851434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.851491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.851515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.851573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.851585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.851604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.851967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.851995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.852006 | orchestrator | 2025-05-14 14:53:50.852023 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-14 14:53:50.852035 | 
orchestrator | Wednesday 14 May 2025 14:52:07 +0000 (0:00:13.717) 0:01:29.627 ********* 2025-05-14 14:53:50.852046 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.852057 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.852068 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.852079 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:53:50.852096 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:53:50.852111 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:53:50.852122 | orchestrator | 2025-05-14 14:53:50.852133 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-14 14:53:50.852144 | orchestrator | Wednesday 14 May 2025 14:52:11 +0000 (0:00:03.903) 0:01:33.531 ********* 2025-05-14 14:53:50.852155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852248 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.852268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852323 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.852339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852398 
| orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.852410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852478 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.852491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852550 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.852571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852630 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:50.852643 | orchestrator | 2025-05-14 14:53:50.852655 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-14 14:53:50.852667 | orchestrator | Wednesday 14 May 2025 14:52:13 +0000 (0:00:01.849) 0:01:35.380 ********* 2025-05-14 14:53:50.852679 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.852692 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.852704 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.852716 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.852727 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.852739 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 14:53:50.852751 | orchestrator | 2025-05-14 14:53:50.852763 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-14 14:53:50.852775 | orchestrator | Wednesday 14 May 2025 14:52:14 +0000 (0:00:01.173) 0:01:36.554 ********* 2025-05-14 14:53:50.852799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 14:53:50.852872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.852897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.852909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.852921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.852932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.852954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 14:53:50.852973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.852985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.852997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.853008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.853025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.853052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.853064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.853076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.853087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.853099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.853122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:53:50.853140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.853152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 14:53:50.853163 | orchestrator | 2025-05-14 14:53:50.853174 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 14:53:50.853185 | orchestrator | Wednesday 14 May 2025 14:52:17 +0000 (0:00:03.071) 0:01:39.626 ********* 2025-05-14 14:53:50.853220 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.853241 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:53:50.853261 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:53:50.853279 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:53:50.853298 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:53:50.853309 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:53:50.853320 | orchestrator | 2025-05-14 14:53:50.853330 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-14 14:53:50.853341 | orchestrator | Wednesday 14 May 2025 14:52:17 +0000 (0:00:00.631) 0:01:40.257 ********* 2025-05-14 14:53:50.853352 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:50.853363 | orchestrator | 2025-05-14 14:53:50.853373 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-14 14:53:50.853384 | orchestrator | Wednesday 14 May 2025 14:52:20 +0000 (0:00:02.497) 0:01:42.755 ********* 2025-05-14 14:53:50.853395 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:50.853405 | orchestrator | 2025-05-14 14:53:50.853416 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-14 14:53:50.853427 | orchestrator | Wednesday 14 May 
2025 14:52:22 +0000 (0:00:02.356) 0:01:45.111 ********* 2025-05-14 14:53:50.853445 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:50.853456 | orchestrator | 2025-05-14 14:53:50.853467 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 14:53:50.853478 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:17.757) 0:02:02.869 ********* 2025-05-14 14:53:50.853489 | orchestrator | 2025-05-14 14:53:50.853500 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 14:53:50.853510 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:00.056) 0:02:02.925 ********* 2025-05-14 14:53:50.853521 | orchestrator | 2025-05-14 14:53:50.853532 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 14:53:50.853543 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:00.200) 0:02:03.125 ********* 2025-05-14 14:53:50.853553 | orchestrator | 2025-05-14 14:53:50.853564 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 14:53:50.853574 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:00.056) 0:02:03.182 ********* 2025-05-14 14:53:50.853585 | orchestrator | 2025-05-14 14:53:50.853596 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 14:53:50.853607 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:00.050) 0:02:03.233 ********* 2025-05-14 14:53:50.853618 | orchestrator | 2025-05-14 14:53:50.853629 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 14:53:50.853639 | orchestrator | Wednesday 14 May 2025 14:52:40 +0000 (0:00:00.051) 0:02:03.284 ********* 2025-05-14 14:53:50.853650 | orchestrator | 2025-05-14 14:53:50.853666 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-14 14:53:50.853677 | orchestrator | Wednesday 14 May 2025 14:52:41 +0000 (0:00:00.219) 0:02:03.504 ********* 2025-05-14 14:53:50.853688 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:50.853698 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:50.853709 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:50.853719 | orchestrator | 2025-05-14 14:53:50.853730 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-14 14:53:50.853740 | orchestrator | Wednesday 14 May 2025 14:52:59 +0000 (0:00:18.164) 0:02:21.668 ********* 2025-05-14 14:53:50.853751 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:53:50.853762 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:53:50.853772 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:53:50.853783 | orchestrator | 2025-05-14 14:53:50.853794 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-14 14:53:50.853812 | orchestrator | Wednesday 14 May 2025 14:53:11 +0000 (0:00:12.058) 0:02:33.726 ********* 2025-05-14 14:53:50.853824 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:53:50.853834 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:53:50.853845 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:53:50.853856 | orchestrator | 2025-05-14 14:53:50.853866 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-14 14:53:50.853877 | orchestrator | Wednesday 
14 May 2025 14:53:36 +0000 (0:00:25.136) 0:02:58.863 ********* 2025-05-14 14:53:50.854059 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:53:50.854073 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:53:50.854084 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:53:50.854095 | orchestrator | 2025-05-14 14:53:50.854106 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-14 14:53:50.854117 | orchestrator | Wednesday 14 May 2025 14:53:47 +0000 (0:00:10.888) 0:03:09.751 ********* 2025-05-14 14:53:50.854128 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:53:50.854139 | orchestrator | 2025-05-14 14:53:50.854150 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:53:50.854161 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 14:53:50.854173 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 14:53:50.854193 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 14:53:50.854232 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:53:50.854244 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:53:50.854255 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 14:53:50.854265 | orchestrator | 2025-05-14 14:53:50.854277 | orchestrator | 2025-05-14 14:53:50.854288 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:53:50.854298 | orchestrator | Wednesday 14 May 2025 14:53:47 +0000 (0:00:00.546) 0:03:10.297 ********* 2025-05-14 14:53:50.854309 | orchestrator | =============================================================================== 2025-05-14 14:53:50.854320 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.14s 2025-05-14 14:53:50.854331 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.16s 2025-05-14 14:53:50.854342 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.76s 2025-05-14 14:53:50.854352 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.72s 2025-05-14 14:53:50.854363 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.06s 2025-05-14 14:53:50.854374 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.89s 2025-05-14 14:53:50.854389 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.08s 2025-05-14 14:53:50.854408 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.19s 2025-05-14 14:53:50.854427 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 5.47s 2025-05-14 14:53:50.854446 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.32s 2025-05-14 14:53:50.854459 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.00s 2025-05-14 14:53:50.854471 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.95s 
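The TASKS RECAP entries above and below are per-task wall-clock timings (most likely emitted by Ansible's profile_tasks callback); they show that the container restarts and the Cinder bootstrap container dominate the roughly three-minute cinder play. As a small, self-contained sketch for post-processing such logs — not part of the job itself, and assuming only the line format visible here — the durations can be extracted like this:

    import re

    # Matches recap lines such as:
    #   cinder : Restart cinder-volume container ------------------------------- 25.14s
    RECAP_RE = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+(?:\.\d+)?)s\s*$")

    def parse_recap(lines):
        """Yield (task name, seconds) pairs from TASKS RECAP style lines."""
        for line in lines:
            match = RECAP_RE.match(line.strip())
            if match:
                yield match.group("task"), float(match.group("secs"))

    sample = [
        "cinder : Restart cinder-volume container ------------------------------- 25.14s",
        "cinder : Restart cinder-api container ---------------------------------- 18.16s",
    ]
    for task, secs in sorted(parse_recap(sample), key=lambda pair: -pair[1]):
        print(f"{secs:6.2f}s  {task}")
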
2025-05-14 14:53:50.854481 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.90s 2025-05-14 14:53:50.854492 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.74s 2025-05-14 14:53:50.854503 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.68s 2025-05-14 14:53:50.854514 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.61s 2025-05-14 14:53:50.854525 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.42s 2025-05-14 14:53:50.854536 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.25s 2025-05-14 14:53:50.854557 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.07s 2025-05-14 14:53:50.854568 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.04s 2025-05-14 14:53:50.854579 | orchestrator | 2025-05-14 14:53:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:50.854590 | orchestrator | 2025-05-14 14:53:50 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:50.854608 | orchestrator | 2025-05-14 14:53:50 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:53:50.857603 | orchestrator | 2025-05-14 14:53:50 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:50.857635 | orchestrator | 2025-05-14 14:53:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:53.899007 | orchestrator | 2025-05-14 14:53:53 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:53.899320 | orchestrator | 2025-05-14 14:53:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:53.902394 | orchestrator | 2025-05-14 14:53:53 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:53.903367 | orchestrator | 2025-05-14 14:53:53 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:53:53.905047 | orchestrator | 2025-05-14 14:53:53 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:53.905294 | orchestrator | 2025-05-14 14:53:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:56.942565 | orchestrator | 2025-05-14 14:53:56 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:56.943686 | orchestrator | 2025-05-14 14:53:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:56.944784 | orchestrator | 2025-05-14 14:53:56 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:56.946138 | orchestrator | 2025-05-14 14:53:56 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:53:56.947682 | orchestrator | 2025-05-14 14:53:56 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:56.947705 | orchestrator | 2025-05-14 14:53:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:53:59.992595 | orchestrator | 2025-05-14 14:53:59 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:53:59.993781 | orchestrator | 2025-05-14 14:53:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:53:59.995492 | orchestrator | 
2025-05-14 14:53:59 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:53:59.997342 | orchestrator | 2025-05-14 14:53:59 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:53:59.999311 | orchestrator | 2025-05-14 14:53:59 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:53:59.999347 | orchestrator | 2025-05-14 14:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:03.039693 | orchestrator | 2025-05-14 14:54:03 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:03.041583 | orchestrator | 2025-05-14 14:54:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:03.043381 | orchestrator | 2025-05-14 14:54:03 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:03.045022 | orchestrator | 2025-05-14 14:54:03 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:03.046909 | orchestrator | 2025-05-14 14:54:03 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:03.047030 | orchestrator | 2025-05-14 14:54:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:06.080914 | orchestrator | 2025-05-14 14:54:06 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:06.081577 | orchestrator | 2025-05-14 14:54:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:06.082364 | orchestrator | 2025-05-14 14:54:06 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:06.084900 | orchestrator | 2025-05-14 14:54:06 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:06.087545 | orchestrator | 2025-05-14 14:54:06 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:06.087643 | orchestrator | 2025-05-14 14:54:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:09.125712 | orchestrator | 2025-05-14 14:54:09 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:09.128638 | orchestrator | 2025-05-14 14:54:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:09.131761 | orchestrator | 2025-05-14 14:54:09 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:09.135839 | orchestrator | 2025-05-14 14:54:09 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:09.139686 | orchestrator | 2025-05-14 14:54:09 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:09.140255 | orchestrator | 2025-05-14 14:54:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:12.183668 | orchestrator | 2025-05-14 14:54:12 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:12.185044 | orchestrator | 2025-05-14 14:54:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:12.187311 | orchestrator | 2025-05-14 14:54:12 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:12.189533 | orchestrator | 2025-05-14 14:54:12 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:12.190998 | orchestrator | 2025-05-14 14:54:12 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 
14:54:12.191033 | orchestrator | 2025-05-14 14:54:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:15.244616 | orchestrator | 2025-05-14 14:54:15 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:15.246098 | orchestrator | 2025-05-14 14:54:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:15.247925 | orchestrator | 2025-05-14 14:54:15 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:15.250380 | orchestrator | 2025-05-14 14:54:15 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:15.251956 | orchestrator | 2025-05-14 14:54:15 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:15.252008 | orchestrator | 2025-05-14 14:54:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:18.293800 | orchestrator | 2025-05-14 14:54:18 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:18.296478 | orchestrator | 2025-05-14 14:54:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:18.297938 | orchestrator | 2025-05-14 14:54:18 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:18.298873 | orchestrator | 2025-05-14 14:54:18 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:18.300325 | orchestrator | 2025-05-14 14:54:18 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:18.300363 | orchestrator | 2025-05-14 14:54:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:21.349450 | orchestrator | 2025-05-14 14:54:21 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:21.349639 | orchestrator | 2025-05-14 14:54:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:21.351673 | orchestrator | 2025-05-14 14:54:21 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:21.355868 | orchestrator | 2025-05-14 14:54:21 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:21.358338 | orchestrator | 2025-05-14 14:54:21 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:21.358468 | orchestrator | 2025-05-14 14:54:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:24.397210 | orchestrator | 2025-05-14 14:54:24 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:24.398530 | orchestrator | 2025-05-14 14:54:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:24.399657 | orchestrator | 2025-05-14 14:54:24 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:24.399683 | orchestrator | 2025-05-14 14:54:24 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:24.400588 | orchestrator | 2025-05-14 14:54:24 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state STARTED 2025-05-14 14:54:24.402070 | orchestrator | 2025-05-14 14:54:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:27.446909 | orchestrator | 2025-05-14 14:54:27 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:27.448461 | orchestrator | 2025-05-14 14:54:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 
14:54:27.450002 | orchestrator | 2025-05-14 14:54:27 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:27.451323 | orchestrator | 2025-05-14 14:54:27 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:27.452252 | orchestrator | 2025-05-14 14:54:27 | INFO  | Task 0194f43c-a9b0-4226-bf9b-1570effeeecb is in state SUCCESS 2025-05-14 14:54:27.452376 | orchestrator | 2025-05-14 14:54:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:30.499286 | orchestrator | 2025-05-14 14:54:30 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:30.500024 | orchestrator | 2025-05-14 14:54:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:30.501924 | orchestrator | 2025-05-14 14:54:30 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:30.503877 | orchestrator | 2025-05-14 14:54:30 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:30.503909 | orchestrator | 2025-05-14 14:54:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:33.550998 | orchestrator | 2025-05-14 14:54:33 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:33.553710 | orchestrator | 2025-05-14 14:54:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:33.554640 | orchestrator | 2025-05-14 14:54:33 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:33.556534 | orchestrator | 2025-05-14 14:54:33 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:33.556629 | orchestrator | 2025-05-14 14:54:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:36.615373 | orchestrator | 2025-05-14 14:54:36 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:36.617563 | orchestrator | 2025-05-14 14:54:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:36.619090 | orchestrator | 2025-05-14 14:54:36 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:36.620695 | orchestrator | 2025-05-14 14:54:36 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:36.620737 | orchestrator | 2025-05-14 14:54:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:39.670766 | orchestrator | 2025-05-14 14:54:39 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:39.670885 | orchestrator | 2025-05-14 14:54:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:39.671875 | orchestrator | 2025-05-14 14:54:39 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:39.672565 | orchestrator | 2025-05-14 14:54:39 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:39.672593 | orchestrator | 2025-05-14 14:54:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:42.711571 | orchestrator | 2025-05-14 14:54:42 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:42.711682 | orchestrator | 2025-05-14 14:54:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:42.711698 | orchestrator | 2025-05-14 14:54:42 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 
14:54:42.712383 | orchestrator | 2025-05-14 14:54:42 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:42.712418 | orchestrator | 2025-05-14 14:54:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:45.757462 | orchestrator | 2025-05-14 14:54:45 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:45.757947 | orchestrator | 2025-05-14 14:54:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:45.759471 | orchestrator | 2025-05-14 14:54:45 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:45.760726 | orchestrator | 2025-05-14 14:54:45 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:45.760811 | orchestrator | 2025-05-14 14:54:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:48.798069 | orchestrator | 2025-05-14 14:54:48 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:48.798584 | orchestrator | 2025-05-14 14:54:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:48.799696 | orchestrator | 2025-05-14 14:54:48 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:48.800574 | orchestrator | 2025-05-14 14:54:48 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:48.801245 | orchestrator | 2025-05-14 14:54:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:51.852234 | orchestrator | 2025-05-14 14:54:51 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:51.853354 | orchestrator | 2025-05-14 14:54:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:51.855927 | orchestrator | 2025-05-14 14:54:51 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:51.858127 | orchestrator | 2025-05-14 14:54:51 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:51.858243 | orchestrator | 2025-05-14 14:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:54.901749 | orchestrator | 2025-05-14 14:54:54 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:54.904311 | orchestrator | 2025-05-14 14:54:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:54.908619 | orchestrator | 2025-05-14 14:54:54 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:54.909442 | orchestrator | 2025-05-14 14:54:54 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:54.909496 | orchestrator | 2025-05-14 14:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:54:57.960318 | orchestrator | 2025-05-14 14:54:57 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:54:57.963020 | orchestrator | 2025-05-14 14:54:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:54:57.967520 | orchestrator | 2025-05-14 14:54:57 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:54:57.968835 | orchestrator | 2025-05-14 14:54:57 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:54:57.969182 | orchestrator | 2025-05-14 14:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:01.020421 | 
orchestrator | 2025-05-14 14:55:01 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:01.022546 | orchestrator | 2025-05-14 14:55:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:01.023943 | orchestrator | 2025-05-14 14:55:01 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:01.025431 | orchestrator | 2025-05-14 14:55:01 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:01.025468 | orchestrator | 2025-05-14 14:55:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:04.072598 | orchestrator | 2025-05-14 14:55:04 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:04.074138 | orchestrator | 2025-05-14 14:55:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:04.075104 | orchestrator | 2025-05-14 14:55:04 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:04.075947 | orchestrator | 2025-05-14 14:55:04 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:04.076267 | orchestrator | 2025-05-14 14:55:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:07.127830 | orchestrator | 2025-05-14 14:55:07 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:07.129732 | orchestrator | 2025-05-14 14:55:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:07.132236 | orchestrator | 2025-05-14 14:55:07 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:07.134235 | orchestrator | 2025-05-14 14:55:07 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:07.134294 | orchestrator | 2025-05-14 14:55:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:10.184642 | orchestrator | 2025-05-14 14:55:10 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:10.188854 | orchestrator | 2025-05-14 14:55:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:10.190375 | orchestrator | 2025-05-14 14:55:10 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:10.191788 | orchestrator | 2025-05-14 14:55:10 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:10.192018 | orchestrator | 2025-05-14 14:55:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:13.243318 | orchestrator | 2025-05-14 14:55:13 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:13.244416 | orchestrator | 2025-05-14 14:55:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:13.247046 | orchestrator | 2025-05-14 14:55:13 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:13.248539 | orchestrator | 2025-05-14 14:55:13 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:13.248570 | orchestrator | 2025-05-14 14:55:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:16.303788 | orchestrator | 2025-05-14 14:55:16 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:16.304858 | orchestrator | 2025-05-14 14:55:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:16.306907 | 
orchestrator | 2025-05-14 14:55:16 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:16.308665 | orchestrator | 2025-05-14 14:55:16 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:16.308705 | orchestrator | 2025-05-14 14:55:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:19.363750 | orchestrator | 2025-05-14 14:55:19 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:19.365846 | orchestrator | 2025-05-14 14:55:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:19.367770 | orchestrator | 2025-05-14 14:55:19 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:19.369900 | orchestrator | 2025-05-14 14:55:19 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:19.370116 | orchestrator | 2025-05-14 14:55:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:22.422354 | orchestrator | 2025-05-14 14:55:22 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:22.423247 | orchestrator | 2025-05-14 14:55:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:22.424525 | orchestrator | 2025-05-14 14:55:22 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state STARTED 2025-05-14 14:55:22.425893 | orchestrator | 2025-05-14 14:55:22 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:22.425938 | orchestrator | 2025-05-14 14:55:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:25.481537 | orchestrator | 2025-05-14 14:55:25 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:25.482566 | orchestrator | 2025-05-14 14:55:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:25.483374 | orchestrator | 2025-05-14 14:55:25 | INFO  | Task 9cca566f-99cd-4980-a6c5-4fbf605ce409 is in state SUCCESS 2025-05-14 14:55:25.484920 | orchestrator | 2025-05-14 14:55:25 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:25.484962 | orchestrator | 2025-05-14 14:55:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:28.529621 | orchestrator | 2025-05-14 14:55:28 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:28.529737 | orchestrator | 2025-05-14 14:55:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:28.530312 | orchestrator | 2025-05-14 14:55:28 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:28.530381 | orchestrator | 2025-05-14 14:55:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:31.568728 | orchestrator | 2025-05-14 14:55:31 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:31.572055 | orchestrator | 2025-05-14 14:55:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:31.572202 | orchestrator | 2025-05-14 14:55:31 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:31.572225 | orchestrator | 2025-05-14 14:55:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:34.616932 | orchestrator | 2025-05-14 14:55:34 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:34.618635 | orchestrator | 2025-05-14 
14:55:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:34.619026 | orchestrator | 2025-05-14 14:55:34 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:34.619448 | orchestrator | 2025-05-14 14:55:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:37.671542 | orchestrator | 2025-05-14 14:55:37 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:37.672291 | orchestrator | 2025-05-14 14:55:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:37.673679 | orchestrator | 2025-05-14 14:55:37 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:37.673708 | orchestrator | 2025-05-14 14:55:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:40.716354 | orchestrator | 2025-05-14 14:55:40 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:40.719430 | orchestrator | 2025-05-14 14:55:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:40.722934 | orchestrator | 2025-05-14 14:55:40 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:40.722972 | orchestrator | 2025-05-14 14:55:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:43.763166 | orchestrator | 2025-05-14 14:55:43 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:43.763465 | orchestrator | 2025-05-14 14:55:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:43.766523 | orchestrator | 2025-05-14 14:55:43 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:43.766601 | orchestrator | 2025-05-14 14:55:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:46.816565 | orchestrator | 2025-05-14 14:55:46 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:46.817441 | orchestrator | 2025-05-14 14:55:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:46.819335 | orchestrator | 2025-05-14 14:55:46 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:46.819426 | orchestrator | 2025-05-14 14:55:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:49.877121 | orchestrator | 2025-05-14 14:55:49 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:49.879358 | orchestrator | 2025-05-14 14:55:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:49.880399 | orchestrator | 2025-05-14 14:55:49 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state STARTED 2025-05-14 14:55:49.880504 | orchestrator | 2025-05-14 14:55:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:55:52.939451 | orchestrator | 2025-05-14 14:55:52 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:55:52.940987 | orchestrator | 2025-05-14 14:55:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:55:52.944704 | orchestrator | 2025-05-14 14:55:52 | INFO  | Task 580e76f5-6cf2-47a2-9242-5814645a38da is in state SUCCESS 2025-05-14 14:55:52.946582 | orchestrator | 2025-05-14 14:55:52.946625 | orchestrator | 2025-05-14 14:55:52.946636 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
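Editor's note: the wait loop logged above (each OSISM apply task is polled until it leaves STARTED, with a one-second pause between checks, until every task reports SUCCESS) follows a simple polling pattern. A minimal sketch in Python, assuming a caller-supplied fetch_state() helper that queries the task API; the helper and its signature are assumptions for illustration, not part of the tooling shown in this log:

import time

# Poll a set of task IDs until every task reports a terminal state,
# sleeping `interval` seconds between rounds, mirroring the
# "is in state STARTED ... Wait 1 second(s) until the next check" output above.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)  # hypothetical helper, e.g. a wrapper around the task API
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

The sketch only reproduces the observable behaviour (per-task state lines plus a fixed sleep); the actual implementation in the deployed tooling may differ.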
2025-05-14 14:55:52.946680 | orchestrator | 2025-05-14 14:55:52.946689 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:55:52.946699 | orchestrator | Wednesday 14 May 2025 14:53:28 +0000 (0:00:00.279) 0:00:00.279 ********* 2025-05-14 14:55:52.946707 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:55:52.946716 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:55:52.946724 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:55:52.946732 | orchestrator | 2025-05-14 14:55:52.946740 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:55:52.946748 | orchestrator | Wednesday 14 May 2025 14:53:29 +0000 (0:00:00.349) 0:00:00.629 ********* 2025-05-14 14:55:52.946756 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-14 14:55:52.946779 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-14 14:55:52.946787 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-14 14:55:52.946795 | orchestrator | 2025-05-14 14:55:52.946828 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-14 14:55:52.946837 | orchestrator | 2025-05-14 14:55:52.946845 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 14:55:52.946853 | orchestrator | Wednesday 14 May 2025 14:53:29 +0000 (0:00:00.322) 0:00:00.952 ********* 2025-05-14 14:55:52.946861 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:55:52.946870 | orchestrator | 2025-05-14 14:55:52.946878 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-14 14:55:52.946886 | orchestrator | Wednesday 14 May 2025 14:53:30 +0000 (0:00:00.726) 0:00:01.679 ********* 2025-05-14 14:55:52.946894 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-14 14:55:52.946902 | orchestrator | 2025-05-14 14:55:52.946910 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-14 14:55:52.946918 | orchestrator | Wednesday 14 May 2025 14:53:33 +0000 (0:00:03.272) 0:00:04.951 ********* 2025-05-14 14:55:52.946926 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-14 14:55:52.946934 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-14 14:55:52.946961 | orchestrator | 2025-05-14 14:55:52.946971 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-14 14:55:52.946979 | orchestrator | Wednesday 14 May 2025 14:53:40 +0000 (0:00:07.306) 0:00:12.258 ********* 2025-05-14 14:55:52.946987 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:55:52.946995 | orchestrator | 2025-05-14 14:55:52.947002 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-14 14:55:52.947010 | orchestrator | Wednesday 14 May 2025 14:53:44 +0000 (0:00:03.937) 0:00:16.195 ********* 2025-05-14 14:55:52.947018 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:55:52.947101 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 14:55:52.947111 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
service) 2025-05-14 14:55:52.947119 | orchestrator | 2025-05-14 14:55:52.947126 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-14 14:55:52.947205 | orchestrator | Wednesday 14 May 2025 14:53:53 +0000 (0:00:08.657) 0:00:24.853 ********* 2025-05-14 14:55:52.947216 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:55:52.947225 | orchestrator | 2025-05-14 14:55:52.947234 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-14 14:55:52.947243 | orchestrator | Wednesday 14 May 2025 14:53:56 +0000 (0:00:03.520) 0:00:28.374 ********* 2025-05-14 14:55:52.947252 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 14:55:52.947261 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 14:55:52.947275 | orchestrator | 2025-05-14 14:55:52.947288 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-14 14:55:52.947354 | orchestrator | Wednesday 14 May 2025 14:54:04 +0000 (0:00:07.874) 0:00:36.248 ********* 2025-05-14 14:55:52.947370 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-14 14:55:52.947418 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-14 14:55:52.947434 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-14 14:55:52.947443 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-14 14:55:52.947452 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-14 14:55:52.947461 | orchestrator | 2025-05-14 14:55:52.947469 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 14:55:52.947477 | orchestrator | Wednesday 14 May 2025 14:54:21 +0000 (0:00:16.846) 0:00:53.095 ********* 2025-05-14 14:55:52.947490 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:55:52.947504 | orchestrator | 2025-05-14 14:55:52.947519 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-14 14:55:52.947533 | orchestrator | Wednesday 14 May 2025 14:54:22 +0000 (0:00:00.859) 0:00:53.954 ********* 2025-05-14 14:55:52.947565 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-14 14:55:52.947578 | orchestrator | 2025-05-14 14:55:52.947586 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:55:52.947595 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-14 14:55:52.947605 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:55:52.947633 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:55:52.947642 | orchestrator | 2025-05-14 14:55:52.947649 | orchestrator | 2025-05-14 14:55:52.947658 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:55:52.947676 | orchestrator | Wednesday 14 May 2025 14:54:25 +0000 (0:00:03.399) 0:00:57.354 ********* 2025-05-14 14:55:52.947684 | orchestrator | =============================================================================== 2025-05-14 14:55:52.947692 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.85s 2025-05-14 14:55:52.947717 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.66s 2025-05-14 14:55:52.947725 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.87s 2025-05-14 14:55:52.947733 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.31s 2025-05-14 14:55:52.947741 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.94s 2025-05-14 14:55:52.947759 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.52s 2025-05-14 14:55:52.947768 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.40s 2025-05-14 14:55:52.947775 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.27s 2025-05-14 14:55:52.947783 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.86s 2025-05-14 14:55:52.947791 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.73s 2025-05-14 14:55:52.947799 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-05-14 14:55:52.947817 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2025-05-14 14:55:52.947826 | orchestrator | 2025-05-14 14:55:52.947834 | orchestrator | 2025-05-14 14:55:52.947842 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:55:52.947850 | orchestrator | 2025-05-14 14:55:52.947857 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:55:52.947865 | orchestrator | Wednesday 14 May 2025 14:53:05 +0000 (0:00:00.158) 0:00:00.158 ********* 2025-05-14 14:55:52.947873 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:55:52.947881 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:55:52.947889 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:55:52.947897 | orchestrator | 2025-05-14 14:55:52.947907 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-05-14 14:55:52.947920 | orchestrator | Wednesday 14 May 2025 14:53:05 +0000 (0:00:00.326) 0:00:00.484 ********* 2025-05-14 14:55:52.947933 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-14 14:55:52.947946 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-14 14:55:52.947959 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-14 14:55:52.947972 | orchestrator | 2025-05-14 14:55:52.947985 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-14 14:55:52.947999 | orchestrator | 2025-05-14 14:55:52.948012 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-14 14:55:52.948026 | orchestrator | Wednesday 14 May 2025 14:53:05 +0000 (0:00:00.429) 0:00:00.914 ********* 2025-05-14 14:55:52.948040 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:55:52.948050 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:55:52.948057 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:55:52.948065 | orchestrator | 2025-05-14 14:55:52.948073 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:55:52.948081 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:55:52.948089 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:55:52.948097 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:55:52.948105 | orchestrator | 2025-05-14 14:55:52.948113 | orchestrator | 2025-05-14 14:55:52.948121 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:55:52.948129 | orchestrator | Wednesday 14 May 2025 14:55:23 +0000 (0:02:17.938) 0:02:18.853 ********* 2025-05-14 14:55:52.948175 | orchestrator | =============================================================================== 2025-05-14 14:55:52.948189 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 137.94s 2025-05-14 14:55:52.948202 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-05-14 14:55:52.948215 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-05-14 14:55:52.948228 | orchestrator | 2025-05-14 14:55:52.948241 | orchestrator | 2025-05-14 14:55:52.948255 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:55:52.948269 | orchestrator | 2025-05-14 14:55:52.948291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:55:52.948316 | orchestrator | Wednesday 14 May 2025 14:53:51 +0000 (0:00:00.295) 0:00:00.295 ********* 2025-05-14 14:55:52.948332 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:55:52.948340 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:55:52.948348 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:55:52.948360 | orchestrator | 2025-05-14 14:55:52.948374 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:55:52.948387 | orchestrator | Wednesday 14 May 2025 14:53:51 +0000 (0:00:00.435) 0:00:00.730 ********* 2025-05-14 14:55:52.948399 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-05-14 14:55:52.948407 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-14 14:55:52.948415 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-14 14:55:52.948422 | orchestrator | 2025-05-14 14:55:52.948430 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-14 14:55:52.948438 | orchestrator | 2025-05-14 14:55:52.948451 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-14 14:55:52.948460 | orchestrator | Wednesday 14 May 2025 14:53:52 +0000 (0:00:00.505) 0:00:01.236 ********* 2025-05-14 14:55:52.948468 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:55:52.948476 | orchestrator | 2025-05-14 14:55:52.948484 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-14 14:55:52.948498 | orchestrator | Wednesday 14 May 2025 14:53:52 +0000 (0:00:00.747) 0:00:01.984 ********* 2025-05-14 14:55:52.948513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948558 | orchestrator | 2025-05-14 14:55:52.948566 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-14 14:55:52.948574 | orchestrator | Wednesday 14 May 2025 14:53:54 +0000 (0:00:01.250) 0:00:03.235 ********* 2025-05-14 
14:55:52.948588 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-14 14:55:52.948597 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-14 14:55:52.948605 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:55:52.948612 | orchestrator | 2025-05-14 14:55:52.948620 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-14 14:55:52.948628 | orchestrator | Wednesday 14 May 2025 14:53:54 +0000 (0:00:00.532) 0:00:03.768 ********* 2025-05-14 14:55:52.948636 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:55:52.948644 | orchestrator | 2025-05-14 14:55:52.948652 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-14 14:55:52.948660 | orchestrator | Wednesday 14 May 2025 14:53:55 +0000 (0:00:00.604) 0:00:04.372 ********* 2025-05-14 14:55:52.948678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948709 | orchestrator | 2025-05-14 14:55:52.948717 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-14 14:55:52.948725 | orchestrator | Wednesday 14 May 2025 14:53:56 +0000 (0:00:01.370) 0:00:05.742 ********* 2025-05-14 14:55:52.948733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:55:52.948742 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:55:52.948755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:55:52.948764 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:55:52.948778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:55:52.948787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:55:52.948795 | orchestrator | 2025-05-14 14:55:52.948803 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-14 14:55:52.948811 | orchestrator | Wednesday 14 May 2025 14:53:57 +0000 (0:00:00.641) 0:00:06.383 ********* 2025-05-14 14:55:52.948823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:55:52.948832 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:55:52.948845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:55:52.948859 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:55:52.948873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 14:55:52.948887 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:55:52.948898 | orchestrator | 2025-05-14 14:55:52.948911 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-14 14:55:52.948926 | orchestrator | Wednesday 14 May 2025 14:53:57 +0000 (0:00:00.682) 0:00:07.066 ********* 2025-05-14 14:55:52.948941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.948974 | orchestrator | 2025-05-14 14:55:52.948982 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-14 14:55:52.948994 | orchestrator | Wednesday 14 May 2025 14:53:59 +0000 (0:00:01.369) 0:00:08.435 ********* 2025-05-14 14:55:52.949003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.949012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.949035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.949043 | orchestrator | 2025-05-14 14:55:52.949051 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-14 14:55:52.949059 | orchestrator | Wednesday 14 May 2025 14:54:00 +0000 (0:00:01.490) 0:00:09.925 ********* 2025-05-14 14:55:52.949067 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:55:52.949075 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:55:52.949083 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:55:52.949091 | orchestrator | 2025-05-14 14:55:52.949099 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-14 14:55:52.949107 | orchestrator | Wednesday 14 May 2025 14:54:01 +0000 (0:00:00.284) 0:00:10.209 ********* 2025-05-14 14:55:52.949115 | orchestrator 
| changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-14 14:55:52.949122 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-14 14:55:52.949130 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-14 14:55:52.949171 | orchestrator | 2025-05-14 14:55:52.949181 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-14 14:55:52.949189 | orchestrator | Wednesday 14 May 2025 14:54:02 +0000 (0:00:01.377) 0:00:11.587 ********* 2025-05-14 14:55:52.949197 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-14 14:55:52.949205 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-14 14:55:52.949213 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-14 14:55:52.949221 | orchestrator | 2025-05-14 14:55:52.949234 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-14 14:55:52.949243 | orchestrator | Wednesday 14 May 2025 14:54:03 +0000 (0:00:01.342) 0:00:12.929 ********* 2025-05-14 14:55:52.949251 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:55:52.949259 | orchestrator | 2025-05-14 14:55:52.949267 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-14 14:55:52.949274 | orchestrator | Wednesday 14 May 2025 14:54:04 +0000 (0:00:00.352) 0:00:13.282 ********* 2025-05-14 14:55:52.949283 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-14 14:55:52.949291 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-14 14:55:52.949299 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:55:52.949306 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:55:52.949314 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:55:52.949322 | orchestrator | 2025-05-14 14:55:52.949334 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-14 14:55:52.949342 | orchestrator | Wednesday 14 May 2025 14:54:05 +0000 (0:00:00.852) 0:00:14.134 ********* 2025-05-14 14:55:52.949350 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:55:52.949358 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:55:52.949379 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:55:52.949387 | orchestrator | 2025-05-14 14:55:52.949395 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-14 14:55:52.949420 | orchestrator | Wednesday 14 May 2025 14:54:05 +0000 (0:00:00.334) 0:00:14.469 ********* 2025-05-14 14:55:52.949429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1081070, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.997689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1081070, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.997689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1081070, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.997689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1081016, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9816887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1081016, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9816887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1081016, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9816887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949497 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1081005, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9786887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1081005, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9786887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1081005, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9786887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1081035, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9916887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.949531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1081035, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9916887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1081035, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9916887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080995, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9726887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080995, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9726887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080995, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9726887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1081009, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9786887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1081009, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747231276.9786887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1081009, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9786887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1081025, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9846888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1081025, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9846888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1081025, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9846888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080813, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9706886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080813, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9706886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080813, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9706886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080784, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.932688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080784, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.932688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080784, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.932688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080996, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9736886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080996, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9736886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080996, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9736886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080804, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.935688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080804, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.935688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080804, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.935688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1081024, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231276.9846888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1081024, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231276.9846888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1081024, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231276.9846888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1080997, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231276.9756887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1080997, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 
1747231276.9756887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1080997, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231276.9756887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1081042, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.994689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1081042, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.994689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1081042, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.994689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080810, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.937688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080810, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.937688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080810, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.937688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1081012, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9806886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1081012, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9806886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1081012, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9806886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080787, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.934688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080787, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.934688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080787, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.934688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080808, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9366882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080808, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9366882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080808, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9366882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1081002, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9766886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1081002, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9766886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1081002, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231276.9766886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1081188, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1556911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1081188, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1556911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1081188, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1556911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1081179, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1126904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1081179, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1126904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1081179, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1126904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1081337, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747231277.1616912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1081337, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1616912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1081337, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1616912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1081097, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0976903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1081097, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0976903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1081097, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0976903, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1081346, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1636913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1081346, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1636913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.950991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1081346, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1636913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1081309, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1576912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1081309, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1576912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1081309, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1576912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1081314, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1586912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1081314, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1586912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1081314, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1586912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1081137, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0986903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-05-14 14:55:52.951081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1081137, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0986903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1081137, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0986903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1081183, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1136906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1081183, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1136906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1081183, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1136906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951167 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1081355, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1666913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1081355, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1666913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1081355, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1666913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1081324, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1606913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1081324, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1606913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1081324, 'dev': 127, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747231277.1606913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1081145, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1026905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1081145, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1026905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1081143, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0996904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1081145, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1026905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1081143, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0996904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1081154, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1056905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1081143, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.0996904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1081154, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1056905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1081154, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1056905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1081161, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1747231277.1116905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1081161, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1116905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1081161, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1116905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081365, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1736913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081365, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1736913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081365, 'dev': 127, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747231277.1736913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 14:55:52.951483 | orchestrator | 2025-05-14 14:55:52.951491 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-14 14:55:52.951500 | orchestrator | Wednesday 14 May 2025 14:54:39 +0000 (0:00:33.898) 0:00:48.368 ********* 2025-05-14 14:55:52.951514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.951527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.951536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 14:55:52.951549 | orchestrator | 2025-05-14 14:55:52.951557 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-14 14:55:52.951565 | orchestrator | Wednesday 14 May 2025 14:54:40 +0000 (0:00:01.089) 0:00:49.457 ********* 2025-05-14 14:55:52.951573 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:55:52.951581 | orchestrator | 2025-05-14 14:55:52.951589 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-14 14:55:52.951597 | orchestrator | Wednesday 14 May 2025 14:54:42 +0000 (0:00:02.575) 0:00:52.032 ********* 2025-05-14 14:55:52.951604 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:55:52.951612 | orchestrator | 2025-05-14 14:55:52.951620 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-14 14:55:52.951628 | orchestrator | Wednesday 14 May 2025 14:54:45 +0000 (0:00:02.296) 0:00:54.328 ********* 2025-05-14 14:55:52.951636 | 
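The two database tasks above ("Creating grafana database" and "Creating grafana database user and setting permissions") boil down to a CREATE DATABASE, CREATE USER and GRANT against the MariaDB cluster. Below is a minimal sketch of that step, assuming PyMySQL and placeholder credentials; kolla-ansible actually performs it through its own modules inside its toolbox container, not with this code.

```python
# Conceptual sketch only: the driver (PyMySQL), host and credentials are
# illustrative assumptions, not values taken from this job.
import pymysql

def ensure_grafana_db(host: str, admin_user: str, admin_password: str,
                      grafana_password: str) -> None:
    """Create the grafana schema and a grafana user with full rights on it."""
    conn = pymysql.connect(host=host, user=admin_user, password=admin_password)
    try:
        with conn.cursor() as cur:
            cur.execute("CREATE DATABASE IF NOT EXISTS grafana")
            # %% is an escaped literal % because a parameter is passed here
            cur.execute(
                "CREATE USER IF NOT EXISTS 'grafana'@'%%' IDENTIFIED BY %s",
                (grafana_password,),
            )
            cur.execute("GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'%'")
        conn.commit()
    finally:
        conn.close()
```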
orchestrator |
2025-05-14 14:55:52.951644 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-14 14:55:52.951651 | orchestrator | Wednesday 14 May 2025 14:54:45 +0000 (0:00:00.071) 0:00:54.400 *********
2025-05-14 14:55:52.951659 | orchestrator |
2025-05-14 14:55:52.951667 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-14 14:55:52.951675 | orchestrator | Wednesday 14 May 2025 14:54:45 +0000 (0:00:00.052) 0:00:54.453 *********
2025-05-14 14:55:52.951682 | orchestrator |
2025-05-14 14:55:52.951690 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-14 14:55:52.951698 | orchestrator | Wednesday 14 May 2025 14:54:45 +0000 (0:00:00.186) 0:00:54.640 *********
2025-05-14 14:55:52.951706 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:55:52.951713 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:55:52.951721 | orchestrator | changed: [testbed-node-0]
2025-05-14 14:55:52.951729 | orchestrator |
2025-05-14 14:55:52.951737 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-14 14:55:52.951757 | orchestrator | Wednesday 14 May 2025 14:54:47 +0000 (0:00:01.827) 0:00:56.467 *********
2025-05-14 14:55:52.951764 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:55:52.951772 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:55:52.951780 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-14 14:55:52.951788 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-14 14:55:52.951796 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
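The "Waiting for grafana to start on first node" handler above is a retry-until-healthy check: it probes the Grafana endpoint, fails while the container is still starting, and counts down the remaining retries (12, 11, 10 ...) until the service answers. A minimal sketch of the same pattern follows; the URL, retry count, delay and plain urllib probe are assumptions for illustration, not the kolla-ansible task itself.

```python
# Illustrative retry loop mirroring the handler output above.
import time
import urllib.error
import urllib.request

def wait_for_grafana(url: str = "https://api-int.testbed.osism.xyz:3000/login",
                     retries: int = 12, delay: float = 10.0) -> bool:
    """Return True once the endpoint answers with HTTP 200, else False."""
    for attempt in range(retries, 0, -1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet (connection refused, TLS not ready, ...)
        print(f"FAILED - RETRYING: Waiting for grafana to start ({attempt} retries left).")
        time.sleep(delay)
    return False
```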
2025-05-14 14:55:52.951804 | orchestrator | ok: [testbed-node-0]
2025-05-14 14:55:52.951813 | orchestrator |
2025-05-14 14:55:52.951820 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-14 14:55:52.951828 | orchestrator | Wednesday 14 May 2025 14:55:26 +0000 (0:00:39.390) 0:01:35.857 *********
2025-05-14 14:55:52.951836 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:55:52.951844 | orchestrator | changed: [testbed-node-2]
2025-05-14 14:55:52.951852 | orchestrator | changed: [testbed-node-1]
2025-05-14 14:55:52.951859 | orchestrator |
2025-05-14 14:55:52.951867 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-14 14:55:52.951875 | orchestrator | Wednesday 14 May 2025 14:55:46 +0000 (0:00:19.666) 0:01:55.524 *********
2025-05-14 14:55:52.951883 | orchestrator | ok: [testbed-node-0]
2025-05-14 14:55:52.951893 | orchestrator |
2025-05-14 14:55:52.951906 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-14 14:55:52.951982 | orchestrator | Wednesday 14 May 2025 14:55:48 +0000 (0:00:02.331) 0:01:57.855 *********
2025-05-14 14:55:52.951999 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:55:52.952013 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:55:52.952026 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:55:52.952039 | orchestrator |
2025-05-14 14:55:52.952053 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-14 14:55:52.952077 | orchestrator | Wednesday 14 May 2025 14:55:49 +0000 (0:00:00.442) 0:01:58.298 *********
2025-05-14 14:55:52.952092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-14 14:55:52.952114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-14 14:55:52.952123 | orchestrator |
2025-05-14 14:55:52.952151 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-14 14:55:52.952164 | orchestrator | Wednesday 14 May 2025 14:55:51 +0000 (0:00:02.812) 0:02:01.110 *********
2025-05-14 14:55:52.952172 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:55:52.952180 | orchestrator |
2025-05-14 14:55:52.952188 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 14:55:52.952197 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-14 14:55:52.952206 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-14 14:55:52.952214 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-14 14:55:52.952222 | orchestrator |
2025-05-14 14:55:52.952230 | orchestrator |
2025-05-14 14:55:52.952238 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 14:55:52.952246 | orchestrator | Wednesday 14 May 2025 14:55:52 +0000 (0:00:00.410) 0:02:01.520 *********
2025-05-14 14:55:52.952253 | orchestrator | ===============================================================================
2025-05-14 14:55:52.952261 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.39s
2025-05-14 14:55:52.952269 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.90s
2025-05-14 14:55:52.952276 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 19.67s
2025-05-14 14:55:52.952284 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.81s
2025-05-14 14:55:52.952292 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.58s
2025-05-14 14:55:52.952300 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.33s
2025-05-14 14:55:52.952308 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.30s
2025-05-14 14:55:52.952317 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.83s
2025-05-14 14:55:52.952331 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.49s
2025-05-14 14:55:52.952343 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s
2025-05-14 14:55:52.952357 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s
2025-05-14 14:55:52.952371 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s
2025-05-14 14:55:52.952383 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s
2025-05-14 14:55:52.952391 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.25s
2025-05-14 14:55:52.952399 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s
2025-05-14 14:55:52.952409 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.85s
2025-05-14 14:55:52.952423 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s
2025-05-14 14:55:52.952445 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.68s
2025-05-14 14:55:52.952459 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.64s
2025-05-14 14:55:52.952474 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s
2025-05-14 14:55:52.952485 | orchestrator | 2025-05-14 14:55:52 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:55:55.996206 | orchestrator | 2025-05-14 14:55:55 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED
2025-05-14 14:55:55.996322 | orchestrator | 2025-05-14 14:55:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 14:55:55.996338 | orchestrator | 2025-05-14 14:55:55 | INFO  | Wait 1 second(s) until the next check
2025-05-14 14:55:59.056056 | orchestrator | 2025-05-14 14:55:59 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED
2025-05-14 14:55:59.057922 | orchestrator | 2025-05-14 14:55:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in
state STARTED 2025-05-14 14:55:59.058198 | orchestrator | 2025-05-14 14:55:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:02.122094 | orchestrator | 2025-05-14 14:56:02 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:02.123405 | orchestrator | 2025-05-14 14:56:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:02.123451 | orchestrator | 2025-05-14 14:56:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:05.165249 | orchestrator | 2025-05-14 14:56:05 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:05.166781 | orchestrator | 2025-05-14 14:56:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:05.166815 | orchestrator | 2025-05-14 14:56:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:08.215910 | orchestrator | 2025-05-14 14:56:08 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:08.218521 | orchestrator | 2025-05-14 14:56:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:08.218617 | orchestrator | 2025-05-14 14:56:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:11.256591 | orchestrator | 2025-05-14 14:56:11 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:11.258373 | orchestrator | 2025-05-14 14:56:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:11.258516 | orchestrator | 2025-05-14 14:56:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:14.301264 | orchestrator | 2025-05-14 14:56:14 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:14.301350 | orchestrator | 2025-05-14 14:56:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:14.301365 | orchestrator | 2025-05-14 14:56:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:17.328972 | orchestrator | 2025-05-14 14:56:17 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:17.330241 | orchestrator | 2025-05-14 14:56:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:17.330296 | orchestrator | 2025-05-14 14:56:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:20.384767 | orchestrator | 2025-05-14 14:56:20 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:20.384871 | orchestrator | 2025-05-14 14:56:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:20.384911 | orchestrator | 2025-05-14 14:56:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:23.427704 | orchestrator | 2025-05-14 14:56:23 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:23.427951 | orchestrator | 2025-05-14 14:56:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:23.427969 | orchestrator | 2025-05-14 14:56:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:26.474870 | orchestrator | 2025-05-14 14:56:26 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:26.476349 | orchestrator | 2025-05-14 14:56:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:26.476432 | orchestrator | 2025-05-14 14:56:26 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 14:56:29.525677 | orchestrator | 2025-05-14 14:56:29 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:29.527710 | orchestrator | 2025-05-14 14:56:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:29.527776 | orchestrator | 2025-05-14 14:56:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:32.571355 | orchestrator | 2025-05-14 14:56:32 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:32.571965 | orchestrator | 2025-05-14 14:56:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:32.572302 | orchestrator | 2025-05-14 14:56:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:35.621272 | orchestrator | 2025-05-14 14:56:35 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:35.621384 | orchestrator | 2025-05-14 14:56:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:35.621401 | orchestrator | 2025-05-14 14:56:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:38.656626 | orchestrator | 2025-05-14 14:56:38 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:38.656748 | orchestrator | 2025-05-14 14:56:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:38.656762 | orchestrator | 2025-05-14 14:56:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:41.709560 | orchestrator | 2025-05-14 14:56:41 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:41.713327 | orchestrator | 2025-05-14 14:56:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:41.713916 | orchestrator | 2025-05-14 14:56:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:44.772984 | orchestrator | 2025-05-14 14:56:44 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:44.773764 | orchestrator | 2025-05-14 14:56:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:44.773801 | orchestrator | 2025-05-14 14:56:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:47.820531 | orchestrator | 2025-05-14 14:56:47 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:47.820672 | orchestrator | 2025-05-14 14:56:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:47.820698 | orchestrator | 2025-05-14 14:56:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:50.875999 | orchestrator | 2025-05-14 14:56:50 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:50.876202 | orchestrator | 2025-05-14 14:56:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:50.876223 | orchestrator | 2025-05-14 14:56:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:53.920852 | orchestrator | 2025-05-14 14:56:53 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:53.921310 | orchestrator | 2025-05-14 14:56:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:53.921343 | orchestrator | 2025-05-14 14:56:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:56:56.964656 | orchestrator | 2025-05-14 14:56:56 | 
INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:56:56.966132 | orchestrator | 2025-05-14 14:56:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:56:56.966165 | orchestrator | 2025-05-14 14:56:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:00.016406 | orchestrator | 2025-05-14 14:57:00 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:00.016909 | orchestrator | 2025-05-14 14:57:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:00.016950 | orchestrator | 2025-05-14 14:57:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:03.071723 | orchestrator | 2025-05-14 14:57:03 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:03.073254 | orchestrator | 2025-05-14 14:57:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:03.073307 | orchestrator | 2025-05-14 14:57:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:06.119942 | orchestrator | 2025-05-14 14:57:06 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:06.121699 | orchestrator | 2025-05-14 14:57:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:06.121715 | orchestrator | 2025-05-14 14:57:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:09.168202 | orchestrator | 2025-05-14 14:57:09 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:09.169742 | orchestrator | 2025-05-14 14:57:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:09.169886 | orchestrator | 2025-05-14 14:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:12.212073 | orchestrator | 2025-05-14 14:57:12 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:12.212227 | orchestrator | 2025-05-14 14:57:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:12.212246 | orchestrator | 2025-05-14 14:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:15.259061 | orchestrator | 2025-05-14 14:57:15 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:15.259624 | orchestrator | 2025-05-14 14:57:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:15.259649 | orchestrator | 2025-05-14 14:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:18.302483 | orchestrator | 2025-05-14 14:57:18 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:18.303754 | orchestrator | 2025-05-14 14:57:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:18.303789 | orchestrator | 2025-05-14 14:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:21.348937 | orchestrator | 2025-05-14 14:57:21 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:21.351152 | orchestrator | 2025-05-14 14:57:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:21.351197 | orchestrator | 2025-05-14 14:57:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:24.386923 | orchestrator | 2025-05-14 14:57:24 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:24.387020 | 
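The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the deploy wrapper polling its background tasks once per second until each reaches a terminal state. A minimal sketch of that polling loop is shown below; get_task_state() is a hypothetical placeholder, not the actual osism or Celery API.

```python
# Generic poll-until-terminal loop mirroring the log pattern above.
import time
from typing import Callable, Iterable

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> dict[str, str]:
    """Poll every task ID until all of them reach a terminal state."""
    pending = set(task_ids)
    results: dict[str, str] = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```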
orchestrator | 2025-05-14 14:57:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:24.387035 | orchestrator | 2025-05-14 14:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:27.425174 | orchestrator | 2025-05-14 14:57:27 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:27.427154 | orchestrator | 2025-05-14 14:57:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:27.427225 | orchestrator | 2025-05-14 14:57:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:30.484021 | orchestrator | 2025-05-14 14:57:30 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:30.485208 | orchestrator | 2025-05-14 14:57:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:30.485297 | orchestrator | 2025-05-14 14:57:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:33.527735 | orchestrator | 2025-05-14 14:57:33 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:33.528257 | orchestrator | 2025-05-14 14:57:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:33.528389 | orchestrator | 2025-05-14 14:57:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:36.571375 | orchestrator | 2025-05-14 14:57:36 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:36.572593 | orchestrator | 2025-05-14 14:57:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:36.572624 | orchestrator | 2025-05-14 14:57:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:39.620067 | orchestrator | 2025-05-14 14:57:39 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:39.621332 | orchestrator | 2025-05-14 14:57:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:39.621363 | orchestrator | 2025-05-14 14:57:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:42.668789 | orchestrator | 2025-05-14 14:57:42 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:42.669992 | orchestrator | 2025-05-14 14:57:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:42.670238 | orchestrator | 2025-05-14 14:57:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:45.721572 | orchestrator | 2025-05-14 14:57:45 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:45.722352 | orchestrator | 2025-05-14 14:57:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:45.722385 | orchestrator | 2025-05-14 14:57:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:48.763769 | orchestrator | 2025-05-14 14:57:48 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:48.764497 | orchestrator | 2025-05-14 14:57:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:48.764564 | orchestrator | 2025-05-14 14:57:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:51.805604 | orchestrator | 2025-05-14 14:57:51 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:51.806505 | orchestrator | 2025-05-14 14:57:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state 
STARTED 2025-05-14 14:57:51.806527 | orchestrator | 2025-05-14 14:57:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:54.854077 | orchestrator | 2025-05-14 14:57:54 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:54.854893 | orchestrator | 2025-05-14 14:57:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:54.854925 | orchestrator | 2025-05-14 14:57:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:57:57.885299 | orchestrator | 2025-05-14 14:57:57 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:57:57.885417 | orchestrator | 2025-05-14 14:57:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:57:57.885435 | orchestrator | 2025-05-14 14:57:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:00.932850 | orchestrator | 2025-05-14 14:58:00 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:00.934441 | orchestrator | 2025-05-14 14:58:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:00.934502 | orchestrator | 2025-05-14 14:58:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:03.981853 | orchestrator | 2025-05-14 14:58:03 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:03.983429 | orchestrator | 2025-05-14 14:58:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:03.983469 | orchestrator | 2025-05-14 14:58:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:07.026364 | orchestrator | 2025-05-14 14:58:07 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:07.026777 | orchestrator | 2025-05-14 14:58:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:07.026819 | orchestrator | 2025-05-14 14:58:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:10.070486 | orchestrator | 2025-05-14 14:58:10 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:10.072082 | orchestrator | 2025-05-14 14:58:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:10.072144 | orchestrator | 2025-05-14 14:58:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:13.132029 | orchestrator | 2025-05-14 14:58:13 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:13.132134 | orchestrator | 2025-05-14 14:58:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:13.132146 | orchestrator | 2025-05-14 14:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:16.188069 | orchestrator | 2025-05-14 14:58:16 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:16.188228 | orchestrator | 2025-05-14 14:58:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:16.188245 | orchestrator | 2025-05-14 14:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:19.240293 | orchestrator | 2025-05-14 14:58:19 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:19.241109 | orchestrator | 2025-05-14 14:58:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:19.241148 | orchestrator | 2025-05-14 14:58:19 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 14:58:22.284270 | orchestrator | 2025-05-14 14:58:22 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:22.285473 | orchestrator | 2025-05-14 14:58:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:22.285495 | orchestrator | 2025-05-14 14:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:25.330787 | orchestrator | 2025-05-14 14:58:25 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:25.331915 | orchestrator | 2025-05-14 14:58:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:25.331949 | orchestrator | 2025-05-14 14:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:28.381380 | orchestrator | 2025-05-14 14:58:28 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:28.382517 | orchestrator | 2025-05-14 14:58:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:28.382729 | orchestrator | 2025-05-14 14:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:31.438648 | orchestrator | 2025-05-14 14:58:31 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:31.440473 | orchestrator | 2025-05-14 14:58:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:31.440527 | orchestrator | 2025-05-14 14:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:34.493496 | orchestrator | 2025-05-14 14:58:34 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:34.495226 | orchestrator | 2025-05-14 14:58:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:34.495248 | orchestrator | 2025-05-14 14:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:37.541986 | orchestrator | 2025-05-14 14:58:37 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:37.543572 | orchestrator | 2025-05-14 14:58:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:37.543609 | orchestrator | 2025-05-14 14:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:40.597644 | orchestrator | 2025-05-14 14:58:40 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:40.598430 | orchestrator | 2025-05-14 14:58:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:40.598464 | orchestrator | 2025-05-14 14:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:43.649950 | orchestrator | 2025-05-14 14:58:43 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:43.650817 | orchestrator | 2025-05-14 14:58:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:43.650851 | orchestrator | 2025-05-14 14:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:46.690789 | orchestrator | 2025-05-14 14:58:46 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:46.692019 | orchestrator | 2025-05-14 14:58:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:46.692048 | orchestrator | 2025-05-14 14:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:49.729247 | orchestrator | 2025-05-14 14:58:49 | INFO  | 
Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:49.730541 | orchestrator | 2025-05-14 14:58:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:49.730642 | orchestrator | 2025-05-14 14:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:52.776711 | orchestrator | 2025-05-14 14:58:52 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:52.777637 | orchestrator | 2025-05-14 14:58:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:52.777651 | orchestrator | 2025-05-14 14:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:55.825687 | orchestrator | 2025-05-14 14:58:55 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:55.827238 | orchestrator | 2025-05-14 14:58:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:55.827256 | orchestrator | 2025-05-14 14:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:58:58.898520 | orchestrator | 2025-05-14 14:58:58 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:58:58.898713 | orchestrator | 2025-05-14 14:58:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:58:58.898814 | orchestrator | 2025-05-14 14:58:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:01.922316 | orchestrator | 2025-05-14 14:59:01 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:01.922932 | orchestrator | 2025-05-14 14:59:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:01.922964 | orchestrator | 2025-05-14 14:59:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:04.952312 | orchestrator | 2025-05-14 14:59:04 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:04.954205 | orchestrator | 2025-05-14 14:59:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:04.954242 | orchestrator | 2025-05-14 14:59:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:07.986208 | orchestrator | 2025-05-14 14:59:07 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:07.987735 | orchestrator | 2025-05-14 14:59:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:07.987809 | orchestrator | 2025-05-14 14:59:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:11.040550 | orchestrator | 2025-05-14 14:59:11 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:11.041985 | orchestrator | 2025-05-14 14:59:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:11.042065 | orchestrator | 2025-05-14 14:59:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:14.094598 | orchestrator | 2025-05-14 14:59:14 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:14.095796 | orchestrator | 2025-05-14 14:59:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:14.095830 | orchestrator | 2025-05-14 14:59:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:17.139737 | orchestrator | 2025-05-14 14:59:17 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:17.140770 | 
orchestrator | 2025-05-14 14:59:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:17.140902 | orchestrator | 2025-05-14 14:59:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:20.185322 | orchestrator | 2025-05-14 14:59:20 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:20.186606 | orchestrator | 2025-05-14 14:59:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:20.186641 | orchestrator | 2025-05-14 14:59:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:23.241738 | orchestrator | 2025-05-14 14:59:23 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:23.242573 | orchestrator | 2025-05-14 14:59:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:23.242614 | orchestrator | 2025-05-14 14:59:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:26.283930 | orchestrator | 2025-05-14 14:59:26 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:26.284922 | orchestrator | 2025-05-14 14:59:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:26.285035 | orchestrator | 2025-05-14 14:59:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:29.335202 | orchestrator | 2025-05-14 14:59:29 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:29.337042 | orchestrator | 2025-05-14 14:59:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:29.337127 | orchestrator | 2025-05-14 14:59:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:32.389217 | orchestrator | 2025-05-14 14:59:32 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:32.390578 | orchestrator | 2025-05-14 14:59:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:32.390650 | orchestrator | 2025-05-14 14:59:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:35.443678 | orchestrator | 2025-05-14 14:59:35 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:35.449771 | orchestrator | 2025-05-14 14:59:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:35.450509 | orchestrator | 2025-05-14 14:59:35 | INFO  | Task 1b92cc7c-9434-4cad-803d-52db3533f59e is in state STARTED 2025-05-14 14:59:35.450538 | orchestrator | 2025-05-14 14:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:38.506524 | orchestrator | 2025-05-14 14:59:38 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state STARTED 2025-05-14 14:59:38.509002 | orchestrator | 2025-05-14 14:59:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:38.510693 | orchestrator | 2025-05-14 14:59:38 | INFO  | Task 1b92cc7c-9434-4cad-803d-52db3533f59e is in state STARTED 2025-05-14 14:59:38.511037 | orchestrator | 2025-05-14 14:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:41.563426 | orchestrator | 2025-05-14 14:59:41 | INFO  | Task f12b395a-609c-490b-b0ce-b4d1646a3696 is in state SUCCESS 2025-05-14 14:59:41.564692 | orchestrator | 2025-05-14 14:59:41.564733 | orchestrator | 2025-05-14 14:59:41.564745 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 14:59:41.564756 
| orchestrator | 2025-05-14 14:59:41.564766 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-14 14:59:41.564777 | orchestrator | Wednesday 14 May 2025 14:51:15 +0000 (0:00:00.825) 0:00:00.825 ********* 2025-05-14 14:59:41.564787 | orchestrator | changed: [testbed-manager] 2025-05-14 14:59:41.564798 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.564838 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.564867 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.564954 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.564964 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.564974 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.565049 | orchestrator | 2025-05-14 14:59:41.565061 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 14:59:41.565071 | orchestrator | Wednesday 14 May 2025 14:51:17 +0000 (0:00:01.705) 0:00:02.531 ********* 2025-05-14 14:59:41.565104 | orchestrator | changed: [testbed-manager] 2025-05-14 14:59:41.565113 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.565123 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.565132 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.565142 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.565151 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.565160 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.565257 | orchestrator | 2025-05-14 14:59:41.565268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 14:59:41.565279 | orchestrator | Wednesday 14 May 2025 14:51:20 +0000 (0:00:02.506) 0:00:05.037 ********* 2025-05-14 14:59:41.565289 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-14 14:59:41.565301 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-14 14:59:41.565312 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-14 14:59:41.565322 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-14 14:59:41.565332 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-14 14:59:41.565343 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-14 14:59:41.565353 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-14 14:59:41.565364 | orchestrator | 2025-05-14 14:59:41.565375 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-14 14:59:41.565385 | orchestrator | 2025-05-14 14:59:41.565394 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-14 14:59:41.565404 | orchestrator | Wednesday 14 May 2025 14:51:23 +0000 (0:00:02.887) 0:00:07.925 ********* 2025-05-14 14:59:41.565413 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.565423 | orchestrator | 2025-05-14 14:59:41.565432 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-14 14:59:41.565442 | orchestrator | Wednesday 14 May 2025 14:51:25 +0000 (0:00:02.037) 0:00:09.963 ********* 2025-05-14 14:59:41.565452 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-14 14:59:41.565462 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-14 14:59:41.565472 | 
orchestrator | 2025-05-14 14:59:41.565481 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-14 14:59:41.565491 | orchestrator | Wednesday 14 May 2025 14:51:30 +0000 (0:00:05.123) 0:00:15.086 ********* 2025-05-14 14:59:41.565501 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:59:41.565510 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 14:59:41.565520 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.565529 | orchestrator | 2025-05-14 14:59:41.565539 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-14 14:59:41.565548 | orchestrator | Wednesday 14 May 2025 14:51:34 +0000 (0:00:04.640) 0:00:19.727 ********* 2025-05-14 14:59:41.565557 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.565567 | orchestrator | 2025-05-14 14:59:41.565576 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-14 14:59:41.565586 | orchestrator | Wednesday 14 May 2025 14:51:35 +0000 (0:00:00.949) 0:00:20.677 ********* 2025-05-14 14:59:41.565596 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.565605 | orchestrator | 2025-05-14 14:59:41.565633 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-14 14:59:41.565655 | orchestrator | Wednesday 14 May 2025 14:51:37 +0000 (0:00:01.828) 0:00:22.505 ********* 2025-05-14 14:59:41.565665 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.565684 | orchestrator | 2025-05-14 14:59:41.565694 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 14:59:41.565703 | orchestrator | Wednesday 14 May 2025 14:51:41 +0000 (0:00:03.875) 0:00:26.380 ********* 2025-05-14 14:59:41.565713 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.565722 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.565765 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.565776 | orchestrator | 2025-05-14 14:59:41.565786 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-14 14:59:41.565795 | orchestrator | Wednesday 14 May 2025 14:51:41 +0000 (0:00:00.325) 0:00:26.705 ********* 2025-05-14 14:59:41.565805 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.565815 | orchestrator | 2025-05-14 14:59:41.565824 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-14 14:59:41.565834 | orchestrator | Wednesday 14 May 2025 14:52:12 +0000 (0:00:30.441) 0:00:57.147 ********* 2025-05-14 14:59:41.565844 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.565853 | orchestrator | 2025-05-14 14:59:41.565863 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 14:59:41.565873 | orchestrator | Wednesday 14 May 2025 14:52:26 +0000 (0:00:13.988) 0:01:11.135 ********* 2025-05-14 14:59:41.565882 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.565892 | orchestrator | 2025-05-14 14:59:41.565901 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 14:59:41.565911 | orchestrator | Wednesday 14 May 2025 14:52:37 +0000 (0:00:11.257) 0:01:22.393 ********* 2025-05-14 14:59:41.565935 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.565945 | orchestrator | 2025-05-14 14:59:41.565955 | 
orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-14 14:59:41.565964 | orchestrator | Wednesday 14 May 2025 14:52:38 +0000 (0:00:00.843) 0:01:23.236 ********* 2025-05-14 14:59:41.565974 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.565983 | orchestrator | 2025-05-14 14:59:41.565993 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 14:59:41.566009 | orchestrator | Wednesday 14 May 2025 14:52:38 +0000 (0:00:00.589) 0:01:23.826 ********* 2025-05-14 14:59:41.566148 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.566163 | orchestrator | 2025-05-14 14:59:41.566173 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-14 14:59:41.566182 | orchestrator | Wednesday 14 May 2025 14:52:39 +0000 (0:00:00.701) 0:01:24.527 ********* 2025-05-14 14:59:41.566192 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.566202 | orchestrator | 2025-05-14 14:59:41.566211 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-14 14:59:41.566221 | orchestrator | Wednesday 14 May 2025 14:52:54 +0000 (0:00:15.151) 0:01:39.678 ********* 2025-05-14 14:59:41.566230 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.566240 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566249 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566258 | orchestrator | 2025-05-14 14:59:41.566268 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-14 14:59:41.566278 | orchestrator | 2025-05-14 14:59:41.566287 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-14 14:59:41.566296 | orchestrator | Wednesday 14 May 2025 14:52:55 +0000 (0:00:00.353) 0:01:40.032 ********* 2025-05-14 14:59:41.566306 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.566316 | orchestrator | 2025-05-14 14:59:41.566325 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-14 14:59:41.566335 | orchestrator | Wednesday 14 May 2025 14:52:56 +0000 (0:00:01.102) 0:01:41.134 ********* 2025-05-14 14:59:41.566353 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566362 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566372 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.566382 | orchestrator | 2025-05-14 14:59:41.566391 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-14 14:59:41.566400 | orchestrator | Wednesday 14 May 2025 14:52:58 +0000 (0:00:02.367) 0:01:43.502 ********* 2025-05-14 14:59:41.566410 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566419 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566429 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.566438 | orchestrator | 2025-05-14 14:59:41.566447 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-14 14:59:41.566457 | orchestrator | Wednesday 14 May 2025 14:53:01 +0000 (0:00:02.550) 0:01:46.053 ********* 2025-05-14 14:59:41.566466 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.566476 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 14:59:41.566485 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566495 | orchestrator | 2025-05-14 14:59:41.566504 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-14 14:59:41.566514 | orchestrator | Wednesday 14 May 2025 14:53:02 +0000 (0:00:00.953) 0:01:47.006 ********* 2025-05-14 14:59:41.566523 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 14:59:41.566533 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566543 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 14:59:41.566552 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566562 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-14 14:59:41.566571 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-14 14:59:41.566580 | orchestrator | 2025-05-14 14:59:41.566590 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-14 14:59:41.566599 | orchestrator | Wednesday 14 May 2025 14:53:12 +0000 (0:00:10.085) 0:01:57.091 ********* 2025-05-14 14:59:41.566608 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.566618 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566627 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566636 | orchestrator | 2025-05-14 14:59:41.566646 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-14 14:59:41.566655 | orchestrator | Wednesday 14 May 2025 14:53:12 +0000 (0:00:00.587) 0:01:57.678 ********* 2025-05-14 14:59:41.566665 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 14:59:41.566674 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 14:59:41.566684 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.566693 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566703 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 14:59:41.566712 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566722 | orchestrator | 2025-05-14 14:59:41.566731 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-14 14:59:41.566741 | orchestrator | Wednesday 14 May 2025 14:53:14 +0000 (0:00:01.470) 0:01:59.149 ********* 2025-05-14 14:59:41.566750 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566760 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.566769 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566778 | orchestrator | 2025-05-14 14:59:41.566788 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-14 14:59:41.566797 | orchestrator | Wednesday 14 May 2025 14:53:14 +0000 (0:00:00.564) 0:01:59.714 ********* 2025-05-14 14:59:41.566807 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566816 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566826 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.566835 | orchestrator | 2025-05-14 14:59:41.566844 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-14 14:59:41.566854 | orchestrator | Wednesday 14 May 2025 14:53:15 +0000 (0:00:01.007) 0:02:00.721 ********* 2025-05-14 14:59:41.566870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566879 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 14:59:41.566911 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.566922 | orchestrator | 2025-05-14 14:59:41.566932 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-14 14:59:41.566941 | orchestrator | Wednesday 14 May 2025 14:53:18 +0000 (0:00:02.491) 0:02:03.212 ********* 2025-05-14 14:59:41.566951 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.566960 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.566969 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.566979 | orchestrator | 2025-05-14 14:59:41.567002 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 14:59:41.567011 | orchestrator | Wednesday 14 May 2025 14:53:39 +0000 (0:00:21.413) 0:02:24.626 ********* 2025-05-14 14:59:41.567021 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.567030 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.567039 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.567049 | orchestrator | 2025-05-14 14:59:41.567059 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 14:59:41.567068 | orchestrator | Wednesday 14 May 2025 14:53:51 +0000 (0:00:11.581) 0:02:36.208 ********* 2025-05-14 14:59:41.567094 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.567104 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.567114 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.567124 | orchestrator | 2025-05-14 14:59:41.567133 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-14 14:59:41.567187 | orchestrator | Wednesday 14 May 2025 14:53:52 +0000 (0:00:01.255) 0:02:37.464 ********* 2025-05-14 14:59:41.567197 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.567207 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.567216 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.567226 | orchestrator | 2025-05-14 14:59:41.567235 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-14 14:59:41.567245 | orchestrator | Wednesday 14 May 2025 14:54:03 +0000 (0:00:11.194) 0:02:48.658 ********* 2025-05-14 14:59:41.567254 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.567264 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.567273 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.567283 | orchestrator | 2025-05-14 14:59:41.567292 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-14 14:59:41.567302 | orchestrator | Wednesday 14 May 2025 14:54:04 +0000 (0:00:01.129) 0:02:49.788 ********* 2025-05-14 14:59:41.567311 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.567321 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.567330 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.567339 | orchestrator | 2025-05-14 14:59:41.567349 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-14 14:59:41.567358 | orchestrator | 2025-05-14 14:59:41.567368 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 14:59:41.567377 | orchestrator | Wednesday 14 May 2025 14:54:05 +0000 (0:00:00.367) 0:02:50.155 ********* 2025-05-14 14:59:41.567387 | 
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.567398 | orchestrator | 2025-05-14 14:59:41.567407 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-14 14:59:41.567417 | orchestrator | Wednesday 14 May 2025 14:54:05 +0000 (0:00:00.541) 0:02:50.697 ********* 2025-05-14 14:59:41.567426 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-14 14:59:41.567436 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-14 14:59:41.567445 | orchestrator | 2025-05-14 14:59:41.567455 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-14 14:59:41.567475 | orchestrator | Wednesday 14 May 2025 14:54:09 +0000 (0:00:03.628) 0:02:54.326 ********* 2025-05-14 14:59:41.567495 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-14 14:59:41.567507 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-14 14:59:41.567517 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-14 14:59:41.567526 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-14 14:59:41.567536 | orchestrator | 2025-05-14 14:59:41.567546 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-14 14:59:41.567556 | orchestrator | Wednesday 14 May 2025 14:54:16 +0000 (0:00:06.923) 0:03:01.250 ********* 2025-05-14 14:59:41.567565 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 14:59:41.567758 | orchestrator | 2025-05-14 14:59:41.567769 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-14 14:59:41.567778 | orchestrator | Wednesday 14 May 2025 14:54:19 +0000 (0:00:03.198) 0:03:04.448 ********* 2025-05-14 14:59:41.567788 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 14:59:41.567798 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-14 14:59:41.567807 | orchestrator | 2025-05-14 14:59:41.567817 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-14 14:59:41.567827 | orchestrator | Wednesday 14 May 2025 14:54:23 +0000 (0:00:04.080) 0:03:08.528 ********* 2025-05-14 14:59:41.567836 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 14:59:41.567846 | orchestrator | 2025-05-14 14:59:41.567855 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-14 14:59:41.567865 | orchestrator | Wednesday 14 May 2025 14:54:27 +0000 (0:00:03.567) 0:03:12.096 ********* 2025-05-14 14:59:41.567874 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-14 14:59:41.567884 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-14 14:59:41.567893 | orchestrator | 2025-05-14 14:59:41.567902 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-14 14:59:41.567919 | orchestrator | Wednesday 14 May 2025 14:54:35 +0000 (0:00:08.518) 0:03:20.614 ********* 2025-05-14 14:59:41.567942 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.567958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.567979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.568000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.568019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.568029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.568069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568098 | orchestrator | 2025-05-14 14:59:41.568108 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-14 14:59:41.568118 | orchestrator | Wednesday 14 May 2025 14:54:37 +0000 (0:00:01.491) 0:03:22.106 ********* 2025-05-14 14:59:41.568128 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.568138 | orchestrator | 2025-05-14 14:59:41.568147 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-14 14:59:41.568157 | orchestrator | Wednesday 14 May 2025 14:54:37 +0000 (0:00:00.251) 0:03:22.357 ********* 2025-05-14 14:59:41.568166 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.568176 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.568185 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.568195 | orchestrator | 2025-05-14 14:59:41.568205 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-14 14:59:41.568214 | orchestrator | Wednesday 14 May 2025 14:54:37 +0000 (0:00:00.271) 0:03:22.629 ********* 2025-05-14 14:59:41.568224 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 14:59:41.568234 | orchestrator | 2025-05-14 14:59:41.568249 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-14 14:59:41.568260 | orchestrator | Wednesday 14 May 2025 14:54:38 +0000 (0:00:00.531) 0:03:23.160 ********* 2025-05-14 14:59:41.568270 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.568279 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.568289 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.568298 | orchestrator | 2025-05-14 14:59:41.568307 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 14:59:41.568321 | orchestrator | Wednesday 14 May 2025 14:54:38 +0000 (0:00:00.292) 0:03:23.452 ********* 2025-05-14 14:59:41.568344 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.568354 | orchestrator | 2025-05-14 14:59:41.568364 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-14 14:59:41.568374 | orchestrator | Wednesday 14 May 2025 14:54:39 +0000 (0:00:00.776) 0:03:24.229 ********* 2025-05-14 14:59:41.568402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.568414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.568433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.568450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.568467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.568478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.568488 | orchestrator | 2025-05-14 14:59:41.568497 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-14 14:59:41.568507 | orchestrator | Wednesday 14 May 2025 14:54:41 +0000 (0:00:02.607) 0:03:26.836 ********* 2025-05-14 14:59:41.568536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.568547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568563 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.568579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.568596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568606 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.568616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-05-14 14:59:41.568627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568637 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.568753 | orchestrator | 2025-05-14 14:59:41.568764 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-14 14:59:41.568774 | orchestrator | Wednesday 14 May 2025 14:54:42 +0000 (0:00:00.628) 0:03:27.465 ********* 2025-05-14 14:59:41.568808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.568828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568838 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.568849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.568860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.568924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.568947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.568958 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.568967 | orchestrator | 2025-05-14 14:59:41.568977 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2025-05-14 14:59:41.568987 | orchestrator | Wednesday 14 May 2025 14:54:43 +0000 (0:00:01.070) 0:03:28.536 ********* 2025-05-14 14:59:41.569068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569228 | orchestrator | 2025-05-14 14:59:41.569237 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-14 14:59:41.569247 | orchestrator | Wednesday 14 May 2025 14:54:46 +0000 (0:00:02.637) 0:03:31.173 ********* 2025-05-14 14:59:41.569258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-05-14 14:59:41.569291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569382 | orchestrator | 2025-05-14 14:59:41.569392 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-14 14:59:41.569401 | orchestrator | Wednesday 14 May 2025 14:54:51 +0000 (0:00:05.258) 0:03:36.432 ********* 2025-05-14 14:59:41.569417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.569428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569448 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.569459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.569488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-05-14 14:59:41.569509 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.569519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 14:59:41.569530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569556 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.569566 | orchestrator | 2025-05-14 14:59:41.569576 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-14 14:59:41.569585 | orchestrator | Wednesday 14 May 2025 14:54:52 +0000 (0:00:00.691) 0:03:37.123 ********* 2025-05-14 14:59:41.569595 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.569605 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.569614 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.569635 | orchestrator | 2025-05-14 14:59:41.569645 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-14 14:59:41.569655 | orchestrator | Wednesday 14 May 2025 14:54:53 +0000 (0:00:01.657) 0:03:38.781 ********* 2025-05-14 14:59:41.569671 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 14:59:41.569681 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.569690 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.569699 | orchestrator | 2025-05-14 14:59:41.569709 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-14 14:59:41.569738 | orchestrator | Wednesday 14 May 2025 14:54:54 +0000 (0:00:00.448) 0:03:39.229 ********* 2025-05-14 14:59:41.569754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 14:59:41.569816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.569925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.569946 | orchestrator | 2025-05-14 14:59:41.569997 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 14:59:41.570007 | orchestrator | Wednesday 14 May 2025 14:54:56 +0000 (0:00:02.160) 0:03:41.390 ********* 2025-05-14 14:59:41.570049 | orchestrator | 2025-05-14 14:59:41.570062 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 14:59:41.570072 | orchestrator | Wednesday 14 May 2025 14:54:56 +0000 (0:00:00.266) 0:03:41.657 ********* 2025-05-14 14:59:41.570098 | orchestrator | 2025-05-14 14:59:41.570108 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 14:59:41.570118 | orchestrator | Wednesday 14 May 2025 14:54:56 +0000 (0:00:00.109) 0:03:41.767 ********* 2025-05-14 14:59:41.570127 | orchestrator | 2025-05-14 14:59:41.570137 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-14 14:59:41.570154 | orchestrator | Wednesday 14 May 2025 14:54:57 +0000 (0:00:00.274) 0:03:42.041 ********* 2025-05-14 14:59:41.570164 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.570174 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.570183 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.570192 | orchestrator | 2025-05-14 14:59:41.570202 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-14 14:59:41.570211 | orchestrator | Wednesday 14 May 2025 14:55:14 +0000 (0:00:17.072) 0:03:59.113 ********* 2025-05-14 14:59:41.570226 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.570236 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.570245 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.570254 | orchestrator | 2025-05-14 14:59:41.570264 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-14 14:59:41.570273 | orchestrator | 2025-05-14 14:59:41.570282 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 14:59:41.570292 | orchestrator | Wednesday 14 May 2025 14:55:22 +0000 (0:00:08.765) 0:04:07.878 ********* 2025-05-14 14:59:41.570301 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.570313 | orchestrator | 2025-05-14 14:59:41.570322 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 14:59:41.570332 | orchestrator | Wednesday 14 May 2025 14:55:24 +0000 (0:00:01.443) 0:04:09.322 ********* 2025-05-14 14:59:41.570341 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.570351 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.570360 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.570377 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.570386 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.570396 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.570405 | orchestrator | 2025-05-14 14:59:41.570414 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-14 14:59:41.570424 | orchestrator | Wednesday 14 May 2025 14:55:25 +0000 (0:00:00.780) 0:04:10.102 ********* 2025-05-14 14:59:41.570433 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.570443 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.570452 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.570461 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:59:41.570471 | orchestrator | 2025-05-14 14:59:41.570481 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 14:59:41.570490 | orchestrator | Wednesday 14 May 2025 14:55:26 +0000 (0:00:01.095) 0:04:11.198 ********* 2025-05-14 14:59:41.570500 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-14 14:59:41.570509 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-14 14:59:41.570518 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-14 14:59:41.570528 | orchestrator | 2025-05-14 14:59:41.570537 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 14:59:41.570547 | orchestrator | Wednesday 14 May 2025 14:55:27 +0000 (0:00:00.839) 0:04:12.038 ********* 2025-05-14 14:59:41.570556 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-14 14:59:41.570566 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-14 14:59:41.570575 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-14 14:59:41.570584 | orchestrator | 2025-05-14 14:59:41.570593 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 14:59:41.570603 | orchestrator | Wednesday 14 May 2025 14:55:28 +0000 (0:00:01.374) 0:04:13.412 ********* 2025-05-14 14:59:41.570613 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-14 14:59:41.570622 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.570632 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-14 14:59:41.570641 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.570650 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-14 14:59:41.570660 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.570669 | orchestrator | 2025-05-14 14:59:41.570679 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-14 14:59:41.570688 | orchestrator | Wednesday 14 May 2025 14:55:29 +0000 (0:00:00.644) 0:04:14.056 
********* 2025-05-14 14:59:41.570698 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 14:59:41.570708 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 14:59:41.570717 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.570726 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 14:59:41.570736 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 14:59:41.570745 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 14:59:41.570755 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 14:59:41.570764 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.570773 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 14:59:41.570783 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 14:59:41.570792 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.570802 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 14:59:41.570811 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 14:59:41.570828 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 14:59:41.570838 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 14:59:41.570847 | orchestrator | 2025-05-14 14:59:41.570862 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-14 14:59:41.570872 | orchestrator | Wednesday 14 May 2025 14:55:31 +0000 (0:00:02.274) 0:04:16.330 ********* 2025-05-14 14:59:41.570882 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.570891 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.570900 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.570910 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.570919 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.570929 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.570938 | orchestrator | 2025-05-14 14:59:41.570951 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-14 14:59:41.570961 | orchestrator | Wednesday 14 May 2025 14:55:32 +0000 (0:00:01.117) 0:04:17.448 ********* 2025-05-14 14:59:41.570971 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.570980 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.570989 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.570998 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.571007 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.571017 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.571026 | orchestrator | 2025-05-14 14:59:41.571035 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-14 14:59:41.571045 | orchestrator | Wednesday 14 May 2025 14:55:34 +0000 (0:00:01.840) 0:04:19.288 ********* 2025-05-14 14:59:41.571056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.571068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.571106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.572260 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.572272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.572341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': 
{'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.572358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.572368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.572399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.572439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.572461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.572549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572559 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.572569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.572652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.572682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.572697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572712 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})  2025-05-14 14:59:41.572774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.572848 | orchestrator | 2025-05-14 14:59:41.572859 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 14:59:41.572870 | orchestrator | Wednesday 14 May 2025 14:55:36 +0000 (0:00:02.424) 0:04:21.713 ********* 2025-05-14 14:59:41.572882 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 14:59:41.572895 | orchestrator | 2025-05-14 14:59:41.572906 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-14 14:59:41.572917 | orchestrator | Wednesday 14 May 2025 14:55:38 +0000 (0:00:01.398) 0:04:23.112 ********* 2025-05-14 14:59:41.572938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.572992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.573172 | orchestrator | 2025-05-14 14:59:41.573182 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-14 14:59:41.573191 | orchestrator | Wednesday 14 May 2025 14:55:41 +0000 (0:00:03.758) 0:04:26.870 ********* 2025-05-14 14:59:41.573202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.573212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.573233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573244 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.573254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.573270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.573280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573290 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.573300 | orchestrator | 
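For reference, the healthcheck dictionaries carried in these loop items (interval, retries, start_period, test, timeout) map onto Docker's standard container health-check options. A minimal sketch using the nova-ssh entry from this log; the docker run flags are shown purely for illustration, as an assumption about what the kolla_docker module ultimately configures through the Docker API, not a literal command it executes:

# Illustrative only: the logged healthcheck values expressed as docker run flags.
docker run -d --name nova_ssh \
  --health-cmd 'healthcheck_listen sshd 8022' \
  --health-interval 30s \
  --health-retries 3 \
  --health-start-period 5s \
  --health-timeout 30s \
  registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206

# Inspect the resulting health state (starting / healthy / unhealthy).
docker inspect --format '{{.State.Health.Status}}' nova_ssh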
skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.573315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.573330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573346 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.573357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.573367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573377 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.573387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.573397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573421 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.573437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.573456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573471 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.573481 | orchestrator | 2025-05-14 14:59:41.573491 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-14 14:59:41.573501 | orchestrator | Wednesday 14 May 2025 14:55:43 +0000 (0:00:01.739) 0:04:28.609 ********* 2025-05-14 14:59:41.573511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.573522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.573542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573553 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.573568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.573582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.573599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573609 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.573619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.573629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.573639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573649 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.573670 | 
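Both backend TLS tasks ('Copying over backend internal TLS certificate' and 'Copying over backend internal TLS key') are skipped on every node in this run. The log does not print the failed conditional, but the pattern is consistent with backend TLS simply not being enabled for the testbed; in stock kolla-ansible that is governed by the kolla_enable_tls_backend toggle. A quick check, with the globals.yml path assumed to be the default kolla-ansible location (OSISM keeps its kolla configuration in its own configuration repository):

# Assumed path and variable; kolla_enable_tls_backend defaults to "no" unless set.
grep -n 'kolla_enable_tls_backend' /etc/kolla/globals.yml \
  || echo 'kolla_enable_tls_backend not set -> backend TLS disabled'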
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.573687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573697 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.573707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.573717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573727 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.573737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.573747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.573757 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.573767 | orchestrator | 2025-05-14 14:59:41.573777 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 14:59:41.573786 | orchestrator | Wednesday 14 May 2025 14:55:46 +0000 (0:00:02.586) 0:04:31.196 ********* 2025-05-14 14:59:41.573796 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.573806 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.573820 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.573830 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 14:59:41.573840 | orchestrator | 2025-05-14 14:59:41.573850 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-14 14:59:41.573859 | orchestrator | Wednesday 14 May 2025 14:55:47 +0000 (0:00:01.152) 0:04:32.349 ********* 2025-05-14 14:59:41.573874 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 14:59:41.573884 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 14:59:41.573893 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 14:59:41.573902 | orchestrator | 2025-05-14 14:59:41.573912 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-14 14:59:41.573922 | orchestrator | Wednesday 14 May 2025 14:55:48 +0000 (0:00:00.791) 0:04:33.140 ********* 2025-05-14 14:59:41.573932 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 14:59:41.573941 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 14:59:41.573955 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 14:59:41.573964 | orchestrator | 2025-05-14 14:59:41.573974 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-14 14:59:41.573983 | orchestrator | Wednesday 14 May 2025 14:55:49 +0000 (0:00:00.809) 0:04:33.950 ********* 2025-05-14 14:59:41.573992 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:59:41.574002 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:59:41.574011 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:59:41.574062 | orchestrator | 2025-05-14 14:59:41.574072 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-14 14:59:41.574100 | orchestrator | Wednesday 14 May 2025 14:55:49 +0000 (0:00:00.646) 0:04:34.597 ********* 2025-05-14 14:59:41.574111 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:59:41.574121 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:59:41.574131 | orchestrator | ok: [testbed-node-5] 2025-05-14 14:59:41.574141 | orchestrator | 2025-05-14 14:59:41.574151 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-14 14:59:41.574160 | orchestrator | Wednesday 14 May 2025 14:55:50 +0000 (0:00:00.472) 0:04:35.069 ********* 2025-05-14 14:59:41.574170 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-14 14:59:41.574180 | 
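The keyring handling here (check the file, extract the key, copy it over) boils down to reading the plain-text Ceph keyrings provided for the external-Ceph integration and pulling the base64 key out of their "key = ..." line. A minimal sketch, assuming the usual kolla external-Ceph layout for the nova keyring; the path is an assumption, not taken from this log:

# Assumed location of the nova keyring on the deploy host.
KEYRING=/etc/kolla/config/nova/ceph.client.nova.keyring
test -f "$KEYRING" || echo 'nova keyring missing'

# Extract the base64 key either with awk ...
awk '/key = / {print $3}' "$KEYRING"

# ... or with ceph-authtool, if ceph-common is installed.
ceph-authtool "$KEYRING" --print-key --name client.nova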
orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-14 14:59:41.574189 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-14 14:59:41.574198 | orchestrator | 2025-05-14 14:59:41.574208 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-14 14:59:41.574217 | orchestrator | Wednesday 14 May 2025 14:55:51 +0000 (0:00:01.564) 0:04:36.634 ********* 2025-05-14 14:59:41.574227 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-14 14:59:41.574236 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-14 14:59:41.574246 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-14 14:59:41.574256 | orchestrator | 2025-05-14 14:59:41.574265 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-14 14:59:41.574275 | orchestrator | Wednesday 14 May 2025 14:55:53 +0000 (0:00:01.469) 0:04:38.104 ********* 2025-05-14 14:59:41.574284 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-14 14:59:41.574294 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-14 14:59:41.574303 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-14 14:59:41.574312 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-14 14:59:41.574322 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-14 14:59:41.574331 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-14 14:59:41.574341 | orchestrator | 2025-05-14 14:59:41.574350 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-14 14:59:41.574360 | orchestrator | Wednesday 14 May 2025 14:55:58 +0000 (0:00:05.276) 0:04:43.380 ********* 2025-05-14 14:59:41.574376 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.574386 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.574395 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.574405 | orchestrator | 2025-05-14 14:59:41.574414 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-14 14:59:41.574424 | orchestrator | Wednesday 14 May 2025 14:55:58 +0000 (0:00:00.458) 0:04:43.839 ********* 2025-05-14 14:59:41.574434 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.574443 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.574452 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.574462 | orchestrator | 2025-05-14 14:59:41.574471 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-14 14:59:41.574481 | orchestrator | Wednesday 14 May 2025 14:55:59 +0000 (0:00:00.483) 0:04:44.322 ********* 2025-05-14 14:59:41.574491 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.574501 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.574510 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.574520 | orchestrator | 2025-05-14 14:59:41.574529 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-14 14:59:41.574539 | orchestrator | Wednesday 14 May 2025 14:56:00 +0000 (0:00:01.368) 0:04:45.691 ********* 2025-05-14 14:59:41.574549 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-14 14:59:41.574559 | 
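The two libvirt secret tasks in this stretch ('Pushing nova secret xml for libvirt' and, just after this, 'Pushing secrets key for libvirt') register the Ceph client secrets with libvirt on each compute node: first the secret definition (the UUID logged for client.nova is 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd), then the extracted key. Conceptually this is equivalent to the virsh calls below; kolla-ansible reaches the same end state through its own config files inside the nova_libvirt container, so treat this strictly as a sketch, and the keyring path as assumed:

# Sketch only: define the client.nova ceph secret and attach its key.
cat > /tmp/client-nova-secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
EOF

virsh secret-define --file /tmp/client-nova-secret.xml
virsh secret-set-value --secret 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd \
  --base64 "$(awk '/key = / {print $3}' /etc/ceph/ceph.client.nova.keyring)"  # path assumed
virsh secret-list   # client.nova and client.cinder should both appear once the play finishes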
orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-14 14:59:41.574569 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-14 14:59:41.574579 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-14 14:59:41.574589 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-14 14:59:41.574599 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-14 14:59:41.574608 | orchestrator | 2025-05-14 14:59:41.574618 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-14 14:59:41.574644 | orchestrator | Wednesday 14 May 2025 14:56:04 +0000 (0:00:03.514) 0:04:49.205 ********* 2025-05-14 14:59:41.574655 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 14:59:41.574664 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 14:59:41.574674 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 14:59:41.574683 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 14:59:41.574693 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.574702 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 14:59:41.574717 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.574727 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 14:59:41.574737 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.574746 | orchestrator | 2025-05-14 14:59:41.574755 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-14 14:59:41.574765 | orchestrator | Wednesday 14 May 2025 14:56:07 +0000 (0:00:03.426) 0:04:52.632 ********* 2025-05-14 14:59:41.574774 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.574784 | orchestrator | 2025-05-14 14:59:41.574793 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-14 14:59:41.574802 | orchestrator | Wednesday 14 May 2025 14:56:07 +0000 (0:00:00.124) 0:04:52.757 ********* 2025-05-14 14:59:41.574812 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.574821 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.574830 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.574847 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.574856 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.574866 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.574875 | orchestrator | 2025-05-14 14:59:41.574884 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-14 14:59:41.574894 | orchestrator | Wednesday 14 May 2025 14:56:08 +0000 (0:00:00.883) 0:04:53.640 ********* 2025-05-14 14:59:41.574903 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 14:59:41.574913 | orchestrator | 2025-05-14 14:59:41.574922 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-14 14:59:41.574963 | orchestrator | Wednesday 14 May 2025 14:56:09 +0000 
(0:00:00.404) 0:04:54.044 ********* 2025-05-14 14:59:41.574974 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.574983 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.574994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.575004 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.575014 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.575025 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.575035 | orchestrator | 2025-05-14 14:59:41.575045 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-14 14:59:41.575055 | orchestrator | Wednesday 14 May 2025 14:56:09 +0000 (0:00:00.591) 0:04:54.635 ********* 2025-05-14 14:59:41.575067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.575151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.575187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.575208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575301 | 
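The nova-novncproxy entries written out here health-check the proxy with healthcheck_curl against each node's internal address on port 6080. Assuming healthcheck_curl is essentially a wrapped curl call (an assumption; only the URL and timeout are taken from the log), the same probe can be reproduced by hand, for example against testbed-node-0:

# Manual equivalent of the logged nova_novncproxy healthcheck on testbed-node-0.
curl -fsS --max-time 30 -o /dev/null http://192.168.16.10:6080/vnc_lite.html \
  && echo 'novncproxy answers' || echo 'novncproxy probe failed'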
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}}) 2025-05-14 14:59:41.575393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.575707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575716 | orchestrator | 2025-05-14 14:59:41.575740 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-14 14:59:41.575748 | orchestrator | Wednesday 14 May 2025 14:56:13 +0000 (0:00:03.642) 0:04:58.278 ********* 2025-05-14 14:59:41.575756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 
14:59:41.575773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.575840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.575908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.575934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.575943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.575960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.575978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.575986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.576000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.576008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.576025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.576125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.576133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.576155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.576168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.576189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.576197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.576243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2025-05-14 14:59:41.576256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.576294 | orchestrator | 2025-05-14 14:59:41.576302 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-14 14:59:41.576310 | orchestrator | Wednesday 14 May 2025 14:56:19 +0000 (0:00:06.171) 0:05:04.450 ********* 2025-05-14 14:59:41.576318 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.576326 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.576334 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.576342 | orchestrator | 
skipping: [testbed-node-0]
2025-05-14 14:59:41.576350 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.576357 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.576365 | orchestrator |
2025-05-14 14:59:41.576373 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-05-14 14:59:41.576381 | orchestrator | Wednesday 14 May 2025 14:56:21 +0000 (0:00:01.727) 0:05:06.178 *********
2025-05-14 14:59:41.576389 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-14 14:59:41.576397 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-14 14:59:41.576405 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-14 14:59:41.576413 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-14 14:59:41.576421 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.576516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-14 14:59:41.576527 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.576535 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-14 14:59:41.576542 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-14 14:59:41.576550 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.576562 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-14 14:59:41.576570 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-14 14:59:41.576578 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-14 14:59:41.576586 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-14 14:59:41.576594 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-14 14:59:41.576607 | orchestrator |
2025-05-14 14:59:41.576615 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-05-14 14:59:41.576624 | orchestrator | Wednesday 14 May 2025 14:56:26 +0000 (0:00:05.589) 0:05:11.767 *********
2025-05-14 14:59:41.576631 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:59:41.576639 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:59:41.576647 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:59:41.576655 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.576663 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.576671 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.576678 | orchestrator |
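The two tasks above template libvirt's qemu.conf and libvirtd.conf onto the compute hosts (testbed-node-3/4/5) and skip the control hosts (testbed-node-0/1/2), while the TLS keys task is skipped on every host, so libvirt TLS appears not to be enabled in this run. A minimal Python sketch of the src/dest loop and the compute-only gating seen in these results; the actual logic lives in kolla-ansible's nova-cell role as Ansible template tasks and conditionals, and the host grouping here is only inferred from the changed/skipping pattern:

# Illustrative sketch only: mirrors the src -> dest template loop and the
# compute-only gating visible in the results above; kolla-ansible implements
# this with Ansible template tasks and role conditionals, not with this code.
from pathlib import Path

LIBVIRT_TEMPLATES = [
    {"src": "qemu.conf.j2", "dest": "qemu.conf"},
    {"src": "libvirtd.conf.j2", "dest": "libvirtd.conf"},
]

# Assumption inferred from the changed/skipping pattern in this log.
COMPUTE_HOSTS = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}

def libvirt_files_for(host: str, config_dir: str = "/etc/kolla/nova-libvirt") -> list:
    """Return the config file paths this host would receive, or [] if skipped."""
    if host not in COMPUTE_HOSTS:
        return []  # control hosts skip every item, as logged above
    return [str(Path(config_dir) / item["dest"]) for item in LIBVIRT_TEMPLATES]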
2025-05-14 14:59:41.576686 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-05-14 14:59:41.576694 | orchestrator | Wednesday 14 May 2025 14:56:27 +0000 (0:00:00.931) 0:05:12.698 *********
2025-05-14 14:59:41.576702 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-14 14:59:41.576711 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-14 14:59:41.576719 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-14 14:59:41.576727 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-14 14:59:41.576735 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-14 14:59:41.576742 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576750 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576758 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576766 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-14 14:59:41.576774 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576781 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.576789 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576797 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.576805 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576813 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.576821 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576828 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576836 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576844 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576852 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-14 14:59:41.576868 | orchestrator |
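The SASL task above loops over three template items: auth.conf for both the nova-compute and nova-libvirt services and sasl.conf for nova-libvirt only, and again only the compute hosts receive files. A small sketch that reconstructs those items exactly as they appear in the log; the template contents and the real skip conditions belong to the kolla-ansible nova-cell role and are not reproduced here:

# Illustrative sketch only: reconstructs the loop items shown for the
# "Copying over libvirt SASL configuration" task.
LIBVIRT_SASL_ITEMS = [
    {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-compute"},
    {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-libvirt"},
    {"src": "sasl.conf.j2", "dest": "sasl.conf", "service": "nova-libvirt"},
]

def sasl_files_for(service: str) -> list:
    """List the SASL-related files rendered into the given service's config dir."""
    return [item["dest"] for item in LIBVIRT_SASL_ITEMS if item["service"] == service]

# e.g. sasl_files_for("nova-libvirt") == ["auth.conf", "sasl.conf"]
#      sasl_files_for("nova-compute") == ["auth.conf"]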
2025-05-14 14:59:41.576876 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-05-14 14:59:41.576884 | orchestrator | Wednesday 14 May 2025 14:56:35 +0000 (0:00:07.501) 0:05:20.199 *********
2025-05-14 14:59:41.576891 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-14 14:59:41.576904 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-14 14:59:41.576933 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-14 14:59:41.576943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-14 14:59:41.576950 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-14 14:59:41.576958 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-14 14:59:41.576972 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-14 14:59:41.576980 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-14 14:59:41.576988 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-14 14:59:41.576995 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-14 14:59:41.577003 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-14 14:59:41.577010 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-14 14:59:41.577018 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-14 14:59:41.577026 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.577034 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-14 14:59:41.577041 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.577049 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-14 14:59:41.577057 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.577065 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-14 14:59:41.577073 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-14 14:59:41.577125 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-14 14:59:41.577134 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-14 14:59:41.577142 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-14 14:59:41.577149 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-14 14:59:41.577157 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-14 14:59:41.577165 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-14 14:59:41.577173 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-14 14:59:41.577180 | orchestrator |
2025-05-14 14:59:41.577188 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-05-14 14:59:41.577196 | orchestrator | Wednesday 14 May 2025 14:56:45 +0000 (0:00:10.124) 0:05:30.323 *********
2025-05-14 14:59:41.577204 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:59:41.577212 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:59:41.577219 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:59:41.577227 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.577235 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.577242 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.577250 | orchestrator |
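Each service definition iterated in this play (in the items above and in the policy-file task that follows) carries a 'healthcheck' block with string-valued interval, retries, start_period and timeout fields plus a CMD-SHELL test such as healthcheck_port, healthcheck_curl or healthcheck_listen. A rough sketch of how such a block could be translated into Docker healthcheck parameters, assuming the values are seconds; the actual conversion is done by kolla-ansible's container module and may differ in detail:

# Illustrative sketch only: maps a kolla-style 'healthcheck' block from the
# service definitions in this log onto Docker healthcheck parameters.
SECOND = 1_000_000_000  # the Docker API expects durations in nanoseconds

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert a kolla-style healthcheck dict (string seconds) to API values."""
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port nova-compute 5672']
        "interval": int(hc["interval"]) * SECOND,
        "timeout": int(hc["timeout"]) * SECOND,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * SECOND,
    }

# Example taken from the nova-ssh definition in this log:
print(to_docker_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"], "timeout": "30",
}))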
2025-05-14 14:59:41.577258 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-05-14 14:59:41.577265 | orchestrator | Wednesday 14 May 2025 14:56:46 +0000 (0:00:00.742) 0:05:31.066 *********
2025-05-14 14:59:41.577273 | orchestrator | skipping: [testbed-node-3]
2025-05-14 14:59:41.577281 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:59:41.577294 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:59:41.577302 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.577310 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.577317 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.577325 | orchestrator |
2025-05-14 14:59:41.577333 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-05-14 14:59:41.577341 | orchestrator | Wednesday 14 May 2025 14:56:47 +0000 (0:00:00.873) 0:05:31.939 *********
2025-05-14 14:59:41.577348 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.577356 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.577363 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.577371 | orchestrator | changed: [testbed-node-3]
2025-05-14 14:59:41.577378 | orchestrator | changed: [testbed-node-4]
2025-05-14 14:59:41.577386 | orchestrator | changed: [testbed-node-5]
2025-05-14 14:59:41.577392 | orchestrator |
2025-05-14 14:59:41.577399 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-05-14 14:59:41.577406 | orchestrator | Wednesday 14 May 2025 14:56:49 +0000 (0:00:02.743) 0:05:34.683 *********
2025-05-14 14:59:41.577436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-14 14:59:41.577449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-14 14:59:41.577457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.577484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577529 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.577537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.577544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.577555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.577595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.577614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.577622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 
'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577673 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.577684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.577691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577720 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.577727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.577737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.577747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.577774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577795 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.577809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.577817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.577829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.577850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577887 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.577894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.577901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.577908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.577927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.577937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.577975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.577982 | orchestrator | 2025-05-14 14:59:41.577989 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-14 14:59:41.578004 | orchestrator | Wednesday 14 May 2025 14:56:51 +0000 (0:00:01.941) 0:05:36.625 ********* 2025-05-14 14:59:41.578011 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-14 14:59:41.578039 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-14 14:59:41.578046 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.578052 | orchestrator | skipping: 
[testbed-node-4] => (item=nova-compute)
2025-05-14 14:59:41.578059 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-14 14:59:41.578065 | orchestrator | skipping: [testbed-node-4]
2025-05-14 14:59:41.578072 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-14 14:59:41.578091 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-14 14:59:41.578098 | orchestrator | skipping: [testbed-node-5]
2025-05-14 14:59:41.578104 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-14 14:59:41.578111 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-14 14:59:41.578117 | orchestrator | skipping: [testbed-node-0]
2025-05-14 14:59:41.578123 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-14 14:59:41.578130 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-14 14:59:41.578136 | orchestrator | skipping: [testbed-node-1]
2025-05-14 14:59:41.578143 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-14 14:59:41.578149 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-14 14:59:41.578156 | orchestrator | skipping: [testbed-node-2]
2025-05-14 14:59:41.578162 | orchestrator |
2025-05-14 14:59:41.578169 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-05-14 14:59:41.578175 | orchestrator | Wednesday 14 May 2025 14:56:52 +0000 (0:00:00.995) 0:05:37.620 *********
2025-05-14 14:59:41.578187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-14 14:59:41.578206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-14 14:59:41.578214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name':
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.578221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.578228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 14:59:41.578239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.578254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2025-05-14 14:59:41.578261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.578316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.578345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.578378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.578399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.578544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578558 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 14:59:41.578573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 14:59:41.578584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 14:59:41.578728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 14:59:41.578742 | orchestrator | 2025-05-14 14:59:41.578749 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 14:59:41.578756 | orchestrator | Wednesday 14 May 2025 14:56:56 +0000 (0:00:03.452) 0:05:41.072 ********* 2025-05-14 14:59:41.578762 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.578769 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.578775 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.578782 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.578788 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.578795 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.578806 | orchestrator | 2025-05-14 14:59:41.578812 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 14:59:41.578819 | 
orchestrator | Wednesday 14 May 2025 14:56:57 +0000 (0:00:00.918) 0:05:41.991 ********* 2025-05-14 14:59:41.578826 | orchestrator | 2025-05-14 14:59:41.578832 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 14:59:41.578839 | orchestrator | Wednesday 14 May 2025 14:56:57 +0000 (0:00:00.108) 0:05:42.099 ********* 2025-05-14 14:59:41.578846 | orchestrator | 2025-05-14 14:59:41.578852 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 14:59:41.578859 | orchestrator | Wednesday 14 May 2025 14:56:57 +0000 (0:00:00.252) 0:05:42.352 ********* 2025-05-14 14:59:41.578865 | orchestrator | 2025-05-14 14:59:41.578872 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 14:59:41.578878 | orchestrator | Wednesday 14 May 2025 14:56:57 +0000 (0:00:00.109) 0:05:42.461 ********* 2025-05-14 14:59:41.578885 | orchestrator | 2025-05-14 14:59:41.578891 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 14:59:41.578898 | orchestrator | Wednesday 14 May 2025 14:56:57 +0000 (0:00:00.284) 0:05:42.745 ********* 2025-05-14 14:59:41.578904 | orchestrator | 2025-05-14 14:59:41.578911 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 14:59:41.578917 | orchestrator | Wednesday 14 May 2025 14:56:57 +0000 (0:00:00.108) 0:05:42.853 ********* 2025-05-14 14:59:41.578924 | orchestrator | 2025-05-14 14:59:41.578930 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-14 14:59:41.578937 | orchestrator | Wednesday 14 May 2025 14:56:58 +0000 (0:00:00.301) 0:05:43.155 ********* 2025-05-14 14:59:41.578943 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.578950 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.578956 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.578963 | orchestrator | 2025-05-14 14:59:41.578969 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-14 14:59:41.578976 | orchestrator | Wednesday 14 May 2025 14:57:10 +0000 (0:00:12.156) 0:05:55.311 ********* 2025-05-14 14:59:41.578982 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.578989 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.578995 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.579002 | orchestrator | 2025-05-14 14:59:41.579008 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-14 14:59:41.579015 | orchestrator | Wednesday 14 May 2025 14:57:20 +0000 (0:00:10.128) 0:06:05.440 ********* 2025-05-14 14:59:41.579024 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.579031 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.579038 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.579044 | orchestrator | 2025-05-14 14:59:41.579051 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-14 14:59:41.579057 | orchestrator | Wednesday 14 May 2025 14:57:42 +0000 (0:00:22.144) 0:06:27.585 ********* 2025-05-14 14:59:41.579064 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.579070 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.579094 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.579102 | orchestrator | 
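Editor's note: the loop output above shows how kolla-ansible models each nova-cell service as a dict keyed by service name, carrying the container name, inventory group, enabled flag, image, bind mounts and a Docker healthcheck; a host only acts on entries whose group it belongs to, which is why nova-conductor is "changed" on the control nodes while nova-compute is "skipping" there, and the reverse on the compute nodes. The sketch below is a minimal illustration of that filtering, not kolla-ansible code; the host-to-group mapping is an assumption inferred from the changed/skipping pattern in this log.

    # Minimal sketch (not kolla-ansible source). The service values are copied from
    # the log above; host_groups is an assumption inferred from the changed/skipping
    # split and stands in for the real Ansible inventory.
    nova_cell_services = {
        "nova-conductor": {
            "container_name": "nova_conductor",
            "group": "nova-conductor",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206",
            "healthcheck": {
                "interval": "30", "retries": "3", "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
                "timeout": "30",
            },
        },
        "nova-compute": {
            "container_name": "nova_compute",
            "group": "compute",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206",
        },
        "nova-compute-ironic": {
            "container_name": "nova_compute_ironic",
            "group": "nova-compute-ironic",
            "enabled": False,
            "image": "registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206",
        },
    }

    host_groups = {
        "testbed-node-0": {"nova-conductor"},  # control node (assumed)
        "testbed-node-3": {"compute"},         # compute node (assumed)
    }

    def services_to_handle(host):
        """Service keys that would be handled (not skipped) on the given host."""
        groups = host_groups.get(host, set())
        return [name for name, svc in nova_cell_services.items()
                if svc["enabled"] and svc["group"] in groups]

    print(services_to_handle("testbed-node-0"))  # ['nova-conductor']
    print(services_to_handle("testbed-node-3"))  # ['nova-compute']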
2025-05-14 14:59:41.579108 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-14 14:59:41.579115 | orchestrator | Wednesday 14 May 2025 14:58:10 +0000 (0:00:27.891) 0:06:55.477 ********* 2025-05-14 14:59:41.579121 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.579128 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.579134 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.579141 | orchestrator | 2025-05-14 14:59:41.579147 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-14 14:59:41.579154 | orchestrator | Wednesday 14 May 2025 14:58:11 +0000 (0:00:00.848) 0:06:56.325 ********* 2025-05-14 14:59:41.579161 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.579167 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.579178 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.579185 | orchestrator | 2025-05-14 14:59:41.579191 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-14 14:59:41.579198 | orchestrator | Wednesday 14 May 2025 14:58:12 +0000 (0:00:00.939) 0:06:57.265 ********* 2025-05-14 14:59:41.579204 | orchestrator | changed: [testbed-node-5] 2025-05-14 14:59:41.579211 | orchestrator | changed: [testbed-node-4] 2025-05-14 14:59:41.579217 | orchestrator | changed: [testbed-node-3] 2025-05-14 14:59:41.579224 | orchestrator | 2025-05-14 14:59:41.579230 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-14 14:59:41.579237 | orchestrator | Wednesday 14 May 2025 14:58:33 +0000 (0:00:21.183) 0:07:18.448 ********* 2025-05-14 14:59:41.579243 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.579250 | orchestrator | 2025-05-14 14:59:41.579256 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-14 14:59:41.579263 | orchestrator | Wednesday 14 May 2025 14:58:33 +0000 (0:00:00.140) 0:07:18.589 ********* 2025-05-14 14:59:41.579269 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.579275 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.579282 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.579288 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.579295 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.579302 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
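Editor's note: the FAILED - RETRYING line above is the standard Ansible retries/until pattern: the check re-runs (20 attempts here, delegated to testbed-node-0 in this run) until every expected nova-compute host has registered itself with the cell. A rough, hedged Python equivalent is sketched below; the openstack CLI call is an illustrative stand-in for however the role actually queries the service list, and EXPECTED_HOSTS and the delay are assumed values.

    # Hedged sketch of the retry pattern above ("20 retries left", then ok). The
    # openstack CLI call is a stand-in, not the role's actual implementation, and
    # EXPECTED_HOSTS / delay are illustrative values for this testbed.
    import subprocess
    import time

    EXPECTED_HOSTS = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}

    def registered_compute_hosts():
        """Hosts currently registered as nova-compute services."""
        out = subprocess.run(
            ["openstack", "compute", "service", "list",
             "--service", "nova-compute", "-f", "value", "-c", "Host"],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(out.split())

    def wait_for_compute_registration(retries=20, delay=10):
        for _ in range(retries):
            if EXPECTED_HOSTS <= registered_compute_hosts():
                return
            time.sleep(delay)  # comparable to the task's retries/delay
        raise RuntimeError("nova-compute services failed to register")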
2025-05-14 14:59:41.579308 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:59:41.579315 | orchestrator | 2025-05-14 14:59:41.579321 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-14 14:59:41.579328 | orchestrator | Wednesday 14 May 2025 14:58:55 +0000 (0:00:22.181) 0:07:40.770 ********* 2025-05-14 14:59:41.579335 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.579341 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.579347 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.579354 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.579360 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.579367 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.579373 | orchestrator | 2025-05-14 14:59:41.579380 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-14 14:59:41.579386 | orchestrator | Wednesday 14 May 2025 14:59:05 +0000 (0:00:09.304) 0:07:50.074 ********* 2025-05-14 14:59:41.579393 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.579399 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.579406 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.579412 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.579419 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.579425 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-05-14 14:59:41.579432 | orchestrator | 2025-05-14 14:59:41.579438 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 14:59:41.579445 | orchestrator | Wednesday 14 May 2025 14:59:08 +0000 (0:00:03.047) 0:07:53.122 ********* 2025-05-14 14:59:41.579451 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:59:41.579458 | orchestrator | 2025-05-14 14:59:41.579464 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 14:59:41.579471 | orchestrator | Wednesday 14 May 2025 14:59:19 +0000 (0:00:11.672) 0:08:04.794 ********* 2025-05-14 14:59:41.579477 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:59:41.579484 | orchestrator | 2025-05-14 14:59:41.579490 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-14 14:59:41.579497 | orchestrator | Wednesday 14 May 2025 14:59:20 +0000 (0:00:01.099) 0:08:05.894 ********* 2025-05-14 14:59:41.579503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.579514 | orchestrator | 2025-05-14 14:59:41.579521 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-14 14:59:41.579527 | orchestrator | Wednesday 14 May 2025 14:59:22 +0000 (0:00:01.064) 0:08:06.958 ********* 2025-05-14 14:59:41.579534 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 14:59:41.579540 | orchestrator | 2025-05-14 14:59:41.579547 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-14 14:59:41.579553 | orchestrator | Wednesday 14 May 2025 14:59:32 +0000 (0:00:10.199) 0:08:17.158 ********* 2025-05-14 14:59:41.579560 | orchestrator | ok: [testbed-node-3] 2025-05-14 14:59:41.579566 | orchestrator | ok: [testbed-node-4] 2025-05-14 14:59:41.579573 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 14:59:41.579580 | orchestrator | ok: [testbed-node-0] 2025-05-14 14:59:41.579586 | orchestrator | ok: [testbed-node-1] 2025-05-14 14:59:41.579593 | orchestrator | ok: [testbed-node-2] 2025-05-14 14:59:41.579599 | orchestrator | 2025-05-14 14:59:41.579609 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-14 14:59:41.579616 | orchestrator | 2025-05-14 14:59:41.579622 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-14 14:59:41.579629 | orchestrator | Wednesday 14 May 2025 14:59:34 +0000 (0:00:02.111) 0:08:19.270 ********* 2025-05-14 14:59:41.579636 | orchestrator | changed: [testbed-node-0] 2025-05-14 14:59:41.579642 | orchestrator | changed: [testbed-node-1] 2025-05-14 14:59:41.579649 | orchestrator | changed: [testbed-node-2] 2025-05-14 14:59:41.579655 | orchestrator | 2025-05-14 14:59:41.579665 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-14 14:59:41.579671 | orchestrator | 2025-05-14 14:59:41.579678 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-14 14:59:41.579684 | orchestrator | Wednesday 14 May 2025 14:59:35 +0000 (0:00:01.021) 0:08:20.292 ********* 2025-05-14 14:59:41.579691 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.579697 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.579704 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.579710 | orchestrator | 2025-05-14 14:59:41.579717 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-14 14:59:41.579723 | orchestrator | 2025-05-14 14:59:41.579730 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-14 14:59:41.579736 | orchestrator | Wednesday 14 May 2025 14:59:36 +0000 (0:00:00.735) 0:08:21.027 ********* 2025-05-14 14:59:41.579743 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-14 14:59:41.579749 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-14 14:59:41.579756 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-14 14:59:41.579762 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-14 14:59:41.579769 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-14 14:59:41.579775 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-14 14:59:41.579782 | orchestrator | skipping: [testbed-node-3] 2025-05-14 14:59:41.579788 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-14 14:59:41.579794 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-14 14:59:41.579801 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-14 14:59:41.579807 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-14 14:59:41.579814 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-14 14:59:41.579820 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-14 14:59:41.579827 | orchestrator | skipping: [testbed-node-4] 2025-05-14 14:59:41.579833 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-14 14:59:41.579839 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-14 14:59:41.579846 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-14 14:59:41.579853 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-14 14:59:41.579863 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-14 14:59:41.579870 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-14 14:59:41.579876 | orchestrator | skipping: [testbed-node-5] 2025-05-14 14:59:41.579883 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-14 14:59:41.579889 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-14 14:59:41.579895 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-14 14:59:41.579902 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-14 14:59:41.579908 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-14 14:59:41.579915 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-14 14:59:41.579921 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.579927 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-14 14:59:41.579934 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-14 14:59:41.579940 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-14 14:59:41.579947 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-14 14:59:41.579953 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-14 14:59:41.579960 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-14 14:59:41.579966 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.579973 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-14 14:59:41.579979 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-14 14:59:41.579986 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-14 14:59:41.579992 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-14 14:59:41.579998 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-14 14:59:41.580005 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-14 14:59:41.580012 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.580018 | orchestrator | 2025-05-14 14:59:41.580025 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-14 14:59:41.580031 | orchestrator | 2025-05-14 14:59:41.580038 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-14 14:59:41.580044 | orchestrator | Wednesday 14 May 2025 14:59:37 +0000 (0:00:01.310) 0:08:22.338 ********* 2025-05-14 14:59:41.580051 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-14 14:59:41.580057 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-14 14:59:41.580064 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.580070 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-14 14:59:41.580113 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-14 14:59:41.580121 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.580132 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-14 14:59:41.580139 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-05-14 14:59:41.580145 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.580152 | orchestrator | 2025-05-14 14:59:41.580158 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-14 14:59:41.580165 | orchestrator | 2025-05-14 14:59:41.580171 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-14 14:59:41.580182 | orchestrator | Wednesday 14 May 2025 14:59:38 +0000 (0:00:00.768) 0:08:23.106 ********* 2025-05-14 14:59:41.580188 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.580195 | orchestrator | 2025-05-14 14:59:41.580202 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-14 14:59:41.580208 | orchestrator | 2025-05-14 14:59:41.580215 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-14 14:59:41.580226 | orchestrator | Wednesday 14 May 2025 14:59:39 +0000 (0:00:00.881) 0:08:23.987 ********* 2025-05-14 14:59:41.580233 | orchestrator | skipping: [testbed-node-0] 2025-05-14 14:59:41.580240 | orchestrator | skipping: [testbed-node-1] 2025-05-14 14:59:41.580246 | orchestrator | skipping: [testbed-node-2] 2025-05-14 14:59:41.580253 | orchestrator | 2025-05-14 14:59:41.580259 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 14:59:41.580266 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 14:59:41.580273 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-14 14:59:41.580280 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-14 14:59:41.580287 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-14 14:59:41.580293 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-14 14:59:41.580300 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-14 14:59:41.580306 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-14 14:59:41.580313 | orchestrator | 2025-05-14 14:59:41.580319 | orchestrator | 2025-05-14 14:59:41.580326 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 14:59:41.580333 | orchestrator | Wednesday 14 May 2025 14:59:39 +0000 (0:00:00.542) 0:08:24.529 ********* 2025-05-14 14:59:41.580339 | orchestrator | =============================================================================== 2025-05-14 14:59:41.580346 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.44s 2025-05-14 14:59:41.580352 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.89s 2025-05-14 14:59:41.580359 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.18s 2025-05-14 14:59:41.580365 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.14s 2025-05-14 14:59:41.580372 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.41s 2025-05-14 14:59:41.580379 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 21.18s 2025-05-14 14:59:41.580385 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.07s 2025-05-14 14:59:41.580392 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.15s 2025-05-14 14:59:41.580399 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.99s 2025-05-14 14:59:41.580405 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.16s 2025-05-14 14:59:41.580412 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.67s 2025-05-14 14:59:41.580418 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.58s 2025-05-14 14:59:41.580425 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.26s 2025-05-14 14:59:41.580431 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.19s 2025-05-14 14:59:41.580438 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.20s 2025-05-14 14:59:41.580444 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 10.13s 2025-05-14 14:59:41.580451 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.12s 2025-05-14 14:59:41.580457 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.09s 2025-05-14 14:59:41.580468 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.30s 2025-05-14 14:59:41.580474 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.77s 2025-05-14 14:59:41.580481 | orchestrator | 2025-05-14 14:59:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:41.580488 | orchestrator | 2025-05-14 14:59:41 | INFO  | Task 1b92cc7c-9434-4cad-803d-52db3533f59e is in state STARTED 2025-05-14 14:59:41.580498 | orchestrator | 2025-05-14 14:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:44.626785 | orchestrator | 2025-05-14 14:59:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:44.627767 | orchestrator | 2025-05-14 14:59:44 | INFO  | Task 1b92cc7c-9434-4cad-803d-52db3533f59e is in state STARTED 2025-05-14 14:59:44.628148 | orchestrator | 2025-05-14 14:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:47.674778 | orchestrator | 2025-05-14 14:59:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:47.675378 | orchestrator | 2025-05-14 14:59:47 | INFO  | Task 1b92cc7c-9434-4cad-803d-52db3533f59e is in state SUCCESS 2025-05-14 14:59:47.675409 | orchestrator | 2025-05-14 14:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:50.719788 | orchestrator | 2025-05-14 14:59:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:50.719896 | orchestrator | 2025-05-14 14:59:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:53.761323 | orchestrator | 2025-05-14 14:59:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:53.761425 | orchestrator | 2025-05-14 14:59:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:56.807301 | orchestrator | 2025-05-14 14:59:56 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:56.807414 | orchestrator | 2025-05-14 14:59:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 14:59:59.858275 | orchestrator | 2025-05-14 14:59:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 14:59:59.858383 | orchestrator | 2025-05-14 14:59:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:02.902375 | orchestrator | 2025-05-14 15:00:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:02.902484 | orchestrator | 2025-05-14 15:00:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:05.952811 | orchestrator | 2025-05-14 15:00:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:05.952915 | orchestrator | 2025-05-14 15:00:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:09.006151 | orchestrator | 2025-05-14 15:00:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:09.006283 | orchestrator | 2025-05-14 15:00:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:12.055830 | orchestrator | 2025-05-14 15:00:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:12.055936 | orchestrator | 2025-05-14 15:00:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:15.104654 | orchestrator | 2025-05-14 15:00:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:15.104760 | orchestrator | 2025-05-14 15:00:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:18.156332 | orchestrator | 2025-05-14 15:00:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:18.156488 | orchestrator | 2025-05-14 15:00:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:21.212346 | orchestrator | 2025-05-14 15:00:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:21.212452 | orchestrator | 2025-05-14 15:00:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:24.260942 | orchestrator | 2025-05-14 15:00:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:24.261071 | orchestrator | 2025-05-14 15:00:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:27.310925 | orchestrator | 2025-05-14 15:00:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:27.310997 | orchestrator | 2025-05-14 15:00:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:30.357744 | orchestrator | 2025-05-14 15:00:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:30.357871 | orchestrator | 2025-05-14 15:00:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:33.409470 | orchestrator | 2025-05-14 15:00:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:33.409580 | orchestrator | 2025-05-14 15:00:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:36.455509 | orchestrator | 2025-05-14 15:00:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:36.455641 | orchestrator | 2025-05-14 15:00:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:39.500826 | orchestrator | 2025-05-14 15:00:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 
15:00:39.500957 | orchestrator | 2025-05-14 15:00:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:42.546207 | orchestrator | 2025-05-14 15:00:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:42.546312 | orchestrator | 2025-05-14 15:00:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:45.597606 | orchestrator | 2025-05-14 15:00:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:45.597711 | orchestrator | 2025-05-14 15:00:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:48.646457 | orchestrator | 2025-05-14 15:00:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:48.646572 | orchestrator | 2025-05-14 15:00:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:51.695635 | orchestrator | 2025-05-14 15:00:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:51.695741 | orchestrator | 2025-05-14 15:00:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:54.745578 | orchestrator | 2025-05-14 15:00:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:54.745684 | orchestrator | 2025-05-14 15:00:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:00:57.793429 | orchestrator | 2025-05-14 15:00:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:00:57.793557 | orchestrator | 2025-05-14 15:00:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:00.840238 | orchestrator | 2025-05-14 15:01:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:00.840326 | orchestrator | 2025-05-14 15:01:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:03.889814 | orchestrator | 2025-05-14 15:01:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:03.889954 | orchestrator | 2025-05-14 15:01:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:06.940856 | orchestrator | 2025-05-14 15:01:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:06.940961 | orchestrator | 2025-05-14 15:01:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:09.988983 | orchestrator | 2025-05-14 15:01:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:09.989090 | orchestrator | 2025-05-14 15:01:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:13.030951 | orchestrator | 2025-05-14 15:01:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:13.031056 | orchestrator | 2025-05-14 15:01:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:16.082916 | orchestrator | 2025-05-14 15:01:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:16.083027 | orchestrator | 2025-05-14 15:01:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:19.129481 | orchestrator | 2025-05-14 15:01:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:19.129587 | orchestrator | 2025-05-14 15:01:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:22.180283 | orchestrator | 2025-05-14 15:01:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:22.180416 | orchestrator | 2025-05-14 15:01:22 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 15:01:25.231894 | orchestrator | 2025-05-14 15:01:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:25.231986 | orchestrator | 2025-05-14 15:01:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:28.278752 | orchestrator | 2025-05-14 15:01:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:28.278855 | orchestrator | 2025-05-14 15:01:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:31.331563 | orchestrator | 2025-05-14 15:01:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:31.331667 | orchestrator | 2025-05-14 15:01:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:34.381017 | orchestrator | 2025-05-14 15:01:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:34.381123 | orchestrator | 2025-05-14 15:01:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:37.428634 | orchestrator | 2025-05-14 15:01:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:37.428734 | orchestrator | 2025-05-14 15:01:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:40.477637 | orchestrator | 2025-05-14 15:01:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:40.477741 | orchestrator | 2025-05-14 15:01:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:43.528086 | orchestrator | 2025-05-14 15:01:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:43.528165 | orchestrator | 2025-05-14 15:01:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:46.583220 | orchestrator | 2025-05-14 15:01:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:46.583377 | orchestrator | 2025-05-14 15:01:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:49.628798 | orchestrator | 2025-05-14 15:01:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:49.628929 | orchestrator | 2025-05-14 15:01:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:52.677939 | orchestrator | 2025-05-14 15:01:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:52.678136 | orchestrator | 2025-05-14 15:01:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:55.728427 | orchestrator | 2025-05-14 15:01:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:55.728499 | orchestrator | 2025-05-14 15:01:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:01:58.767889 | orchestrator | 2025-05-14 15:01:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:01:58.767992 | orchestrator | 2025-05-14 15:01:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:01.826117 | orchestrator | 2025-05-14 15:02:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:01.826222 | orchestrator | 2025-05-14 15:02:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:04.868644 | orchestrator | 2025-05-14 15:02:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:04.868748 | orchestrator | 2025-05-14 15:02:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:07.911462 | orchestrator | 2025-05-14 
15:02:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:07.911563 | orchestrator | 2025-05-14 15:02:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:10.957106 | orchestrator | 2025-05-14 15:02:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:10.957216 | orchestrator | 2025-05-14 15:02:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:14.005516 | orchestrator | 2025-05-14 15:02:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:14.005622 | orchestrator | 2025-05-14 15:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:17.054170 | orchestrator | 2025-05-14 15:02:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:17.054331 | orchestrator | 2025-05-14 15:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:20.097249 | orchestrator | 2025-05-14 15:02:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:20.097433 | orchestrator | 2025-05-14 15:02:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:23.135778 | orchestrator | 2025-05-14 15:02:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:23.135883 | orchestrator | 2025-05-14 15:02:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:26.184171 | orchestrator | 2025-05-14 15:02:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:26.184325 | orchestrator | 2025-05-14 15:02:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:29.228026 | orchestrator | 2025-05-14 15:02:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:29.228154 | orchestrator | 2025-05-14 15:02:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:32.287928 | orchestrator | 2025-05-14 15:02:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:32.288031 | orchestrator | 2025-05-14 15:02:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:35.332319 | orchestrator | 2025-05-14 15:02:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:35.332463 | orchestrator | 2025-05-14 15:02:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:38.379353 | orchestrator | 2025-05-14 15:02:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:38.379457 | orchestrator | 2025-05-14 15:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:41.432833 | orchestrator | 2025-05-14 15:02:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:41.432937 | orchestrator | 2025-05-14 15:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:44.480118 | orchestrator | 2025-05-14 15:02:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:44.480218 | orchestrator | 2025-05-14 15:02:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:47.534656 | orchestrator | 2025-05-14 15:02:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:47.534783 | orchestrator | 2025-05-14 15:02:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:50.584806 | orchestrator | 2025-05-14 15:02:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 
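Editor's note: at this point the console has been repeating the same two messages for several minutes: task 1b92cc7c-9434-4cad-803d-52db3533f59e already reached SUCCESS at 14:59:47, while d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f remains in STARTED and is re-checked every few seconds. The messages map onto a simple poll-and-sleep loop like the hedged sketch below; get_task_state is a placeholder for however the OSISM tooling really queries the task backend (the STARTED/SUCCESS values suggest Celery-style states).

    # Hedged sketch of the status polling visible in this log: check the task state,
    # report it, and sleep between checks until a terminal state is reached.
    # get_task_state is a placeholder, not the real OSISM client API.
    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                        level=logging.INFO)

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_task(task_id, get_task_state, interval=1):
        while True:
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in TERMINAL_STATES:
                return state
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)

    # Example with a fake state source that flips to SUCCESS after three checks:
    states = iter(["STARTED", "STARTED", "STARTED", "SUCCESS"])
    wait_for_task("d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f", lambda _id: next(states))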
2025-05-14 15:02:50.584914 | orchestrator | 2025-05-14 15:02:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:53.635428 | orchestrator | 2025-05-14 15:02:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:53.635523 | orchestrator | 2025-05-14 15:02:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:56.697319 | orchestrator | 2025-05-14 15:02:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:56.697482 | orchestrator | 2025-05-14 15:02:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:02:59.746803 | orchestrator | 2025-05-14 15:02:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:02:59.746911 | orchestrator | 2025-05-14 15:02:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:02.800961 | orchestrator | 2025-05-14 15:03:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:02.801085 | orchestrator | 2025-05-14 15:03:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:05.852812 | orchestrator | 2025-05-14 15:03:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:05.852953 | orchestrator | 2025-05-14 15:03:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:08.900994 | orchestrator | 2025-05-14 15:03:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:08.901102 | orchestrator | 2025-05-14 15:03:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:11.945683 | orchestrator | 2025-05-14 15:03:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:11.945807 | orchestrator | 2025-05-14 15:03:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:15.004428 | orchestrator | 2025-05-14 15:03:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:15.004531 | orchestrator | 2025-05-14 15:03:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:18.054246 | orchestrator | 2025-05-14 15:03:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:18.054434 | orchestrator | 2025-05-14 15:03:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:21.099433 | orchestrator | 2025-05-14 15:03:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:21.099572 | orchestrator | 2025-05-14 15:03:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:24.141603 | orchestrator | 2025-05-14 15:03:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:24.141720 | orchestrator | 2025-05-14 15:03:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:27.187963 | orchestrator | 2025-05-14 15:03:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:27.188067 | orchestrator | 2025-05-14 15:03:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:30.230959 | orchestrator | 2025-05-14 15:03:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:30.231069 | orchestrator | 2025-05-14 15:03:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:33.302125 | orchestrator | 2025-05-14 15:03:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:33.302222 | orchestrator | 2025-05-14 15:03:33 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 15:03:36.355505 | orchestrator | 2025-05-14 15:03:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:36.355610 | orchestrator | 2025-05-14 15:03:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:39.407282 | orchestrator | 2025-05-14 15:03:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:39.407446 | orchestrator | 2025-05-14 15:03:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:42.456618 | orchestrator | 2025-05-14 15:03:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:42.456722 | orchestrator | 2025-05-14 15:03:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:45.512647 | orchestrator | 2025-05-14 15:03:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:45.512753 | orchestrator | 2025-05-14 15:03:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:48.560184 | orchestrator | 2025-05-14 15:03:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:48.560360 | orchestrator | 2025-05-14 15:03:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:51.598358 | orchestrator | 2025-05-14 15:03:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:51.598463 | orchestrator | 2025-05-14 15:03:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:54.645640 | orchestrator | 2025-05-14 15:03:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:54.645743 | orchestrator | 2025-05-14 15:03:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:03:57.697074 | orchestrator | 2025-05-14 15:03:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:03:57.697167 | orchestrator | 2025-05-14 15:03:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:00.750202 | orchestrator | 2025-05-14 15:04:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:04:00.750355 | orchestrator | 2025-05-14 15:04:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:03.804121 | orchestrator | 2025-05-14 15:04:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:04:03.804222 | orchestrator | 2025-05-14 15:04:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:06.846300 | orchestrator | 2025-05-14 15:04:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:04:06.846488 | orchestrator | 2025-05-14 15:04:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:09.893426 | orchestrator | 2025-05-14 15:04:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:04:09.893528 | orchestrator | 2025-05-14 15:04:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:12.947453 | orchestrator | 2025-05-14 15:04:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:04:12.947577 | orchestrator | 2025-05-14 15:04:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:15.992515 | orchestrator | 2025-05-14 15:04:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:04:15.992638 | orchestrator | 2025-05-14 15:04:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:04:19.054404 | orchestrator | 
2025-05-14 15:04:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:04:19.054568 | orchestrator | 2025-05-14 15:04:19 | INFO  | Wait 1 second(s) until the next check
[... the two messages above repeat roughly every 3 seconds, from 15:04:22 through 15:09:33, while task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f remains in state STARTED ...]
2025-05-14 15:09:36.141063 | orchestrator | 2025-05-14 15:09:36 | INFO  | Task e7bdb927-cac8-4506-ba0e-99fdfab9102a is in state STARTED
2025-05-14 15:09:36.142182 | orchestrator | 2025-05-14 15:09:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:09:36.142238 | orchestrator | 2025-05-14 15:09:36 | INFO  | Wait 1 second(s) until the next check
[... both tasks are polled again at 15:09:39, 15:09:42 and 15:09:45, each time in state STARTED ...]
2025-05-14 15:09:48.386609 | orchestrator | 2025-05-14 15:09:48 | INFO  | Task e7bdb927-cac8-4506-ba0e-99fdfab9102a is in state SUCCESS
2025-05-14 15:09:48.388911 | orchestrator | 2025-05-14 15:09:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:09:48.388943 | orchestrator | 2025-05-14 15:09:48 | INFO  | Wait 1 second(s) until the next check
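The entries above come from a simple wait loop: the deploy tooling queries the manager for each task's state, logs it, and sleeps before the next check until the task leaves STARTED. The following is a minimal, self-contained sketch of that pattern in Python, not the OSISM implementation; get_task_state() is a hypothetical stand-in for the real state query.

import time
from datetime import datetime

# Hypothetical stand-in for the real state query: reports STARTED a few
# times, then SUCCESS, so the demo loop below terminates.
_states = iter(["STARTED"] * 5 + ["SUCCESS"])

def get_task_state(task_id: str) -> str:
    return next(_states)

def wait_for_task(task_id: str, interval: float = 1.0) -> str:
    # Poll the task state, log it in the same format as the job output,
    # and wait between checks until the task is no longer STARTED.
    while True:
        state = get_task_state(task_id)
        now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        print(f"{now} | INFO  | Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        print(f"{now} | INFO  | Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)

if __name__ == "__main__":
    wait_for_task("d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f")

In the log the checks land roughly every 3 seconds rather than every second, presumably the 1-second wait plus the time the state queries themselves take.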
2025-05-14 15:09:51.444927 | orchestrator | 2025-05-14 15:09:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:09:51.445051 | orchestrator | 2025-05-14 15:09:51 | INFO  | Wait 1 second(s) until the next check
[... the two messages above repeat roughly every 3 seconds, from 15:09:54 through 15:19:22, while task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f remains in state STARTED ...]
2025-05-14 15:19:25.231505 | orchestrator | 2025-05-14 15:19:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14
2025-05-14 15:19:37.454512 | orchestrator | 2025-05-14 15:19:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:19:37.457772 | orchestrator | 2025-05-14 15:19:37 | INFO  | Task 70e53c32-090f-45bd-8789-25dcf599be16 is in state STARTED
2025-05-14 15:19:37.457919 | orchestrator | 2025-05-14 15:19:37 | INFO  | Wait 1 second(s) until the next check
2025-05-14 15:19:46.633492 | orchestrator | 2025-05-14 15:19:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:19:46.634253 | orchestrator | 2025-05-14 15:19:46 | INFO  | Task 70e53c32-090f-45bd-8789-25dcf599be16 is in state SUCCESS
2025-05-14 15:19:46.634292 | orchestrator | 2025-05-14 15:19:46 | INFO  | Wait 1 second(s) until the next check
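The repeated "is in state STARTED ... Wait 1 second(s) until the next check" entries above come from a client-side polling loop: the deploy step re-queries the state of each submitted task and only stops watching a task once it reports SUCCESS. A minimal sketch of that pattern is shown below; get_task_state and the simulated state sequence are assumptions for illustration only, not the actual osism client API.

import itertools
import time

# Simulated state source for illustration: each task reports STARTED a few
# times and then SUCCESS. A real client would query the task broker instead.
_FAKE_STATES = {}

def get_task_state(task_id: str) -> str:
    states = _FAKE_STATES.setdefault(
        task_id, itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS"))
    )
    return next(states)

def wait_for_tasks(task_ids, interval=1.0):
    """Poll every task until all of them report SUCCESS."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)  # stop polling finished tasks
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f"])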
2025-05-14 15:29:35.683377 | orchestrator | 2025-05-14 15:29:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:29:35.684162 | orchestrator | 2025-05-14 15:29:35 | INFO  | Task b2c1dd92-3db3-4501-8202-ead9978488cc is in state STARTED
2025-05-14 15:29:35.684238 | orchestrator | 2025-05-14 15:29:35 | INFO  | Wait 1 second(s) until the next check
2025-05-14 15:29:47.910600 | orchestrator | 2025-05-14 15:29:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED
2025-05-14 15:29:47.911246 | orchestrator | 2025-05-14 15:29:47 | INFO  | Task b2c1dd92-3db3-4501-8202-ead9978488cc is in state SUCCESS
2025-05-14 15:29:47.911357 | orchestrator | 2025-05-14 15:29:47 | INFO  | Wait 1 second(s) until the next check
2025-05-14 15:30:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:06.238267 | orchestrator | 2025-05-14 15:30:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:09.287647 | orchestrator | 2025-05-14 15:30:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:09.287737 | orchestrator | 2025-05-14 15:30:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:12.335756 | orchestrator | 2025-05-14 15:30:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:12.335857 | orchestrator | 2025-05-14 15:30:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:15.386772 | orchestrator | 2025-05-14 15:30:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:15.386941 | orchestrator | 2025-05-14 15:30:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:18.434294 | orchestrator | 2025-05-14 15:30:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:18.434400 | orchestrator | 2025-05-14 15:30:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:21.490751 | orchestrator | 2025-05-14 15:30:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:21.490860 | orchestrator | 2025-05-14 15:30:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:24.552051 | orchestrator | 2025-05-14 15:30:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:24.552164 | orchestrator | 2025-05-14 15:30:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:27.620402 | orchestrator | 2025-05-14 15:30:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:27.620543 | orchestrator | 2025-05-14 15:30:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:30.660774 | orchestrator | 2025-05-14 15:30:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:30.660863 | orchestrator | 2025-05-14 15:30:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:33.708773 | orchestrator | 2025-05-14 15:30:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:33.708947 | orchestrator | 2025-05-14 15:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:36.760520 | orchestrator | 2025-05-14 15:30:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:36.760633 | orchestrator | 2025-05-14 15:30:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:39.806759 | orchestrator | 2025-05-14 15:30:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:39.806865 | orchestrator | 2025-05-14 15:30:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:42.871610 | orchestrator | 2025-05-14 15:30:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:42.871722 | orchestrator | 2025-05-14 15:30:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:45.919959 | orchestrator | 2025-05-14 15:30:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:45.920070 | orchestrator | 2025-05-14 15:30:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:48.965010 | orchestrator | 2025-05-14 15:30:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in 
state STARTED 2025-05-14 15:30:48.965122 | orchestrator | 2025-05-14 15:30:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:52.019425 | orchestrator | 2025-05-14 15:30:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:52.019530 | orchestrator | 2025-05-14 15:30:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:55.069193 | orchestrator | 2025-05-14 15:30:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:55.069363 | orchestrator | 2025-05-14 15:30:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:30:58.122313 | orchestrator | 2025-05-14 15:30:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:30:58.122422 | orchestrator | 2025-05-14 15:30:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:01.162815 | orchestrator | 2025-05-14 15:31:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:01.162998 | orchestrator | 2025-05-14 15:31:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:04.218277 | orchestrator | 2025-05-14 15:31:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:04.218408 | orchestrator | 2025-05-14 15:31:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:07.269432 | orchestrator | 2025-05-14 15:31:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:07.269526 | orchestrator | 2025-05-14 15:31:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:10.308697 | orchestrator | 2025-05-14 15:31:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:10.308760 | orchestrator | 2025-05-14 15:31:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:13.358572 | orchestrator | 2025-05-14 15:31:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:13.358710 | orchestrator | 2025-05-14 15:31:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:16.409182 | orchestrator | 2025-05-14 15:31:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:16.409318 | orchestrator | 2025-05-14 15:31:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:19.460378 | orchestrator | 2025-05-14 15:31:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:19.460481 | orchestrator | 2025-05-14 15:31:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:22.508741 | orchestrator | 2025-05-14 15:31:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:22.508891 | orchestrator | 2025-05-14 15:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:25.551088 | orchestrator | 2025-05-14 15:31:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:25.551174 | orchestrator | 2025-05-14 15:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:28.605315 | orchestrator | 2025-05-14 15:31:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:28.605408 | orchestrator | 2025-05-14 15:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:31.642457 | orchestrator | 2025-05-14 15:31:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:31.642579 | orchestrator | 2025-05-14 15:31:31 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:34.693989 | orchestrator | 2025-05-14 15:31:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:34.694125 | orchestrator | 2025-05-14 15:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:37.754794 | orchestrator | 2025-05-14 15:31:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:37.754993 | orchestrator | 2025-05-14 15:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:40.811934 | orchestrator | 2025-05-14 15:31:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:40.812046 | orchestrator | 2025-05-14 15:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:43.863080 | orchestrator | 2025-05-14 15:31:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:43.863182 | orchestrator | 2025-05-14 15:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:46.913592 | orchestrator | 2025-05-14 15:31:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:46.913686 | orchestrator | 2025-05-14 15:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:49.960810 | orchestrator | 2025-05-14 15:31:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:49.960959 | orchestrator | 2025-05-14 15:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:53.013973 | orchestrator | 2025-05-14 15:31:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:53.014128 | orchestrator | 2025-05-14 15:31:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:56.064172 | orchestrator | 2025-05-14 15:31:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:56.064290 | orchestrator | 2025-05-14 15:31:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:31:59.125025 | orchestrator | 2025-05-14 15:31:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:31:59.125137 | orchestrator | 2025-05-14 15:31:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:02.182643 | orchestrator | 2025-05-14 15:32:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:02.182781 | orchestrator | 2025-05-14 15:32:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:05.233986 | orchestrator | 2025-05-14 15:32:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:05.234089 | orchestrator | 2025-05-14 15:32:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:08.281381 | orchestrator | 2025-05-14 15:32:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:08.281473 | orchestrator | 2025-05-14 15:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:11.332949 | orchestrator | 2025-05-14 15:32:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:11.333053 | orchestrator | 2025-05-14 15:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:14.385924 | orchestrator | 2025-05-14 15:32:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:14.386087 | orchestrator | 2025-05-14 15:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:17.434297 | 
orchestrator | 2025-05-14 15:32:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:17.434407 | orchestrator | 2025-05-14 15:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:20.489440 | orchestrator | 2025-05-14 15:32:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:20.489607 | orchestrator | 2025-05-14 15:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:23.540763 | orchestrator | 2025-05-14 15:32:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:23.540986 | orchestrator | 2025-05-14 15:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:26.589068 | orchestrator | 2025-05-14 15:32:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:26.589127 | orchestrator | 2025-05-14 15:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:29.636512 | orchestrator | 2025-05-14 15:32:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:29.636649 | orchestrator | 2025-05-14 15:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:32.682282 | orchestrator | 2025-05-14 15:32:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:32.682374 | orchestrator | 2025-05-14 15:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:35.739724 | orchestrator | 2025-05-14 15:32:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:35.739862 | orchestrator | 2025-05-14 15:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:38.788335 | orchestrator | 2025-05-14 15:32:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:38.788448 | orchestrator | 2025-05-14 15:32:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:41.847435 | orchestrator | 2025-05-14 15:32:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:41.847552 | orchestrator | 2025-05-14 15:32:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:44.897900 | orchestrator | 2025-05-14 15:32:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:44.898075 | orchestrator | 2025-05-14 15:32:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:47.949436 | orchestrator | 2025-05-14 15:32:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:47.949550 | orchestrator | 2025-05-14 15:32:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:51.007540 | orchestrator | 2025-05-14 15:32:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:51.007641 | orchestrator | 2025-05-14 15:32:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:54.061457 | orchestrator | 2025-05-14 15:32:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:54.061567 | orchestrator | 2025-05-14 15:32:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:32:57.120061 | orchestrator | 2025-05-14 15:32:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:32:57.120165 | orchestrator | 2025-05-14 15:32:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:00.174267 | orchestrator | 2025-05-14 15:33:00 | INFO  | Task 
d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:00.174404 | orchestrator | 2025-05-14 15:33:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:03.227534 | orchestrator | 2025-05-14 15:33:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:03.227624 | orchestrator | 2025-05-14 15:33:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:06.273679 | orchestrator | 2025-05-14 15:33:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:06.273862 | orchestrator | 2025-05-14 15:33:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:09.328240 | orchestrator | 2025-05-14 15:33:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:09.328350 | orchestrator | 2025-05-14 15:33:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:12.383289 | orchestrator | 2025-05-14 15:33:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:12.383399 | orchestrator | 2025-05-14 15:33:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:15.429597 | orchestrator | 2025-05-14 15:33:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:15.429937 | orchestrator | 2025-05-14 15:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:18.482417 | orchestrator | 2025-05-14 15:33:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:18.482533 | orchestrator | 2025-05-14 15:33:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:21.535352 | orchestrator | 2025-05-14 15:33:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:21.535600 | orchestrator | 2025-05-14 15:33:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:24.580186 | orchestrator | 2025-05-14 15:33:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:24.580268 | orchestrator | 2025-05-14 15:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:27.631711 | orchestrator | 2025-05-14 15:33:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:27.631874 | orchestrator | 2025-05-14 15:33:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:30.683822 | orchestrator | 2025-05-14 15:33:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:30.683932 | orchestrator | 2025-05-14 15:33:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:33.727171 | orchestrator | 2025-05-14 15:33:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:33.727280 | orchestrator | 2025-05-14 15:33:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:36.779238 | orchestrator | 2025-05-14 15:33:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:36.779447 | orchestrator | 2025-05-14 15:33:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:39.836787 | orchestrator | 2025-05-14 15:33:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:39.836903 | orchestrator | 2025-05-14 15:33:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:42.895471 | orchestrator | 2025-05-14 15:33:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 
15:33:42.895578 | orchestrator | 2025-05-14 15:33:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:45.941133 | orchestrator | 2025-05-14 15:33:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:45.941242 | orchestrator | 2025-05-14 15:33:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:48.990627 | orchestrator | 2025-05-14 15:33:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:48.990818 | orchestrator | 2025-05-14 15:33:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:52.038821 | orchestrator | 2025-05-14 15:33:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:52.038936 | orchestrator | 2025-05-14 15:33:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:55.086756 | orchestrator | 2025-05-14 15:33:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:55.086887 | orchestrator | 2025-05-14 15:33:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:33:58.141559 | orchestrator | 2025-05-14 15:33:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:33:58.141675 | orchestrator | 2025-05-14 15:33:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:01.196426 | orchestrator | 2025-05-14 15:34:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:01.196562 | orchestrator | 2025-05-14 15:34:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:04.252155 | orchestrator | 2025-05-14 15:34:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:04.252290 | orchestrator | 2025-05-14 15:34:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:07.298375 | orchestrator | 2025-05-14 15:34:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:07.298528 | orchestrator | 2025-05-14 15:34:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:10.352913 | orchestrator | 2025-05-14 15:34:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:10.353051 | orchestrator | 2025-05-14 15:34:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:13.411134 | orchestrator | 2025-05-14 15:34:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:13.411260 | orchestrator | 2025-05-14 15:34:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:16.475458 | orchestrator | 2025-05-14 15:34:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:16.475592 | orchestrator | 2025-05-14 15:34:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:19.525630 | orchestrator | 2025-05-14 15:34:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:19.525795 | orchestrator | 2025-05-14 15:34:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:22.582760 | orchestrator | 2025-05-14 15:34:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:22.582994 | orchestrator | 2025-05-14 15:34:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:25.632270 | orchestrator | 2025-05-14 15:34:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:25.632347 | orchestrator | 2025-05-14 15:34:25 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 15:34:28.677886 | orchestrator | 2025-05-14 15:34:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:28.678009 | orchestrator | 2025-05-14 15:34:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:31.722382 | orchestrator | 2025-05-14 15:34:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:31.722569 | orchestrator | 2025-05-14 15:34:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:34.771498 | orchestrator | 2025-05-14 15:34:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:34.771609 | orchestrator | 2025-05-14 15:34:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:37.822292 | orchestrator | 2025-05-14 15:34:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:37.822388 | orchestrator | 2025-05-14 15:34:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:40.871940 | orchestrator | 2025-05-14 15:34:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:40.872036 | orchestrator | 2025-05-14 15:34:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:43.918237 | orchestrator | 2025-05-14 15:34:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:43.918334 | orchestrator | 2025-05-14 15:34:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:46.969247 | orchestrator | 2025-05-14 15:34:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:46.969362 | orchestrator | 2025-05-14 15:34:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:50.024628 | orchestrator | 2025-05-14 15:34:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:50.024789 | orchestrator | 2025-05-14 15:34:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:53.075605 | orchestrator | 2025-05-14 15:34:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:53.075789 | orchestrator | 2025-05-14 15:34:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:56.126642 | orchestrator | 2025-05-14 15:34:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:56.126826 | orchestrator | 2025-05-14 15:34:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:34:59.181139 | orchestrator | 2025-05-14 15:34:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:34:59.181251 | orchestrator | 2025-05-14 15:34:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:02.234984 | orchestrator | 2025-05-14 15:35:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:02.235135 | orchestrator | 2025-05-14 15:35:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:05.284184 | orchestrator | 2025-05-14 15:35:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:05.284282 | orchestrator | 2025-05-14 15:35:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:08.337711 | orchestrator | 2025-05-14 15:35:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:08.337818 | orchestrator | 2025-05-14 15:35:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:11.395861 | orchestrator | 2025-05-14 
15:35:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:11.395979 | orchestrator | 2025-05-14 15:35:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:14.451087 | orchestrator | 2025-05-14 15:35:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:14.451190 | orchestrator | 2025-05-14 15:35:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:17.502493 | orchestrator | 2025-05-14 15:35:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:17.502627 | orchestrator | 2025-05-14 15:35:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:20.549703 | orchestrator | 2025-05-14 15:35:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:20.549813 | orchestrator | 2025-05-14 15:35:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:23.600030 | orchestrator | 2025-05-14 15:35:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:23.600145 | orchestrator | 2025-05-14 15:35:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:26.650871 | orchestrator | 2025-05-14 15:35:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:26.650977 | orchestrator | 2025-05-14 15:35:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:29.703896 | orchestrator | 2025-05-14 15:35:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:29.704008 | orchestrator | 2025-05-14 15:35:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:32.758171 | orchestrator | 2025-05-14 15:35:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:32.758298 | orchestrator | 2025-05-14 15:35:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:35.811736 | orchestrator | 2025-05-14 15:35:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:35.811852 | orchestrator | 2025-05-14 15:35:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:38.863834 | orchestrator | 2025-05-14 15:35:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:38.863976 | orchestrator | 2025-05-14 15:35:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:41.910184 | orchestrator | 2025-05-14 15:35:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:41.910292 | orchestrator | 2025-05-14 15:35:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:44.961217 | orchestrator | 2025-05-14 15:35:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:44.961307 | orchestrator | 2025-05-14 15:35:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:48.020159 | orchestrator | 2025-05-14 15:35:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:48.020346 | orchestrator | 2025-05-14 15:35:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:51.077934 | orchestrator | 2025-05-14 15:35:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:51.078134 | orchestrator | 2025-05-14 15:35:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:54.131429 | orchestrator | 2025-05-14 15:35:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 
2025-05-14 15:35:54.131555 | orchestrator | 2025-05-14 15:35:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:35:57.180992 | orchestrator | 2025-05-14 15:35:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:35:57.181101 | orchestrator | 2025-05-14 15:35:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:00.230088 | orchestrator | 2025-05-14 15:36:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:00.230199 | orchestrator | 2025-05-14 15:36:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:03.288493 | orchestrator | 2025-05-14 15:36:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:03.288605 | orchestrator | 2025-05-14 15:36:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:06.334327 | orchestrator | 2025-05-14 15:36:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:06.334465 | orchestrator | 2025-05-14 15:36:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:09.386411 | orchestrator | 2025-05-14 15:36:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:09.386519 | orchestrator | 2025-05-14 15:36:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:12.442426 | orchestrator | 2025-05-14 15:36:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:12.442544 | orchestrator | 2025-05-14 15:36:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:15.498720 | orchestrator | 2025-05-14 15:36:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:15.498832 | orchestrator | 2025-05-14 15:36:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:18.548229 | orchestrator | 2025-05-14 15:36:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:18.548323 | orchestrator | 2025-05-14 15:36:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:21.592482 | orchestrator | 2025-05-14 15:36:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:21.592584 | orchestrator | 2025-05-14 15:36:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:24.643580 | orchestrator | 2025-05-14 15:36:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:24.643768 | orchestrator | 2025-05-14 15:36:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:27.698301 | orchestrator | 2025-05-14 15:36:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:27.698434 | orchestrator | 2025-05-14 15:36:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:30.744232 | orchestrator | 2025-05-14 15:36:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:30.744323 | orchestrator | 2025-05-14 15:36:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:33.797857 | orchestrator | 2025-05-14 15:36:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:33.797968 | orchestrator | 2025-05-14 15:36:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:36.843790 | orchestrator | 2025-05-14 15:36:36 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:36.843896 | orchestrator | 2025-05-14 15:36:36 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 15:36:39.896851 | orchestrator | 2025-05-14 15:36:39 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:39.896964 | orchestrator | 2025-05-14 15:36:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:42.940063 | orchestrator | 2025-05-14 15:36:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:42.940166 | orchestrator | 2025-05-14 15:36:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:45.986977 | orchestrator | 2025-05-14 15:36:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:45.987092 | orchestrator | 2025-05-14 15:36:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:49.048013 | orchestrator | 2025-05-14 15:36:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:49.048125 | orchestrator | 2025-05-14 15:36:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:52.094130 | orchestrator | 2025-05-14 15:36:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:52.094238 | orchestrator | 2025-05-14 15:36:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:55.146748 | orchestrator | 2025-05-14 15:36:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:55.146860 | orchestrator | 2025-05-14 15:36:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:36:58.198138 | orchestrator | 2025-05-14 15:36:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:36:58.198236 | orchestrator | 2025-05-14 15:36:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:01.248579 | orchestrator | 2025-05-14 15:37:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:01.248725 | orchestrator | 2025-05-14 15:37:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:04.290195 | orchestrator | 2025-05-14 15:37:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:04.290307 | orchestrator | 2025-05-14 15:37:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:07.340908 | orchestrator | 2025-05-14 15:37:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:07.341019 | orchestrator | 2025-05-14 15:37:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:10.402116 | orchestrator | 2025-05-14 15:37:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:10.402255 | orchestrator | 2025-05-14 15:37:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:13.451003 | orchestrator | 2025-05-14 15:37:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:13.451116 | orchestrator | 2025-05-14 15:37:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:16.498251 | orchestrator | 2025-05-14 15:37:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:16.498328 | orchestrator | 2025-05-14 15:37:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:19.545847 | orchestrator | 2025-05-14 15:37:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:19.545960 | orchestrator | 2025-05-14 15:37:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:22.597266 | orchestrator | 
2025-05-14 15:37:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:22.597372 | orchestrator | 2025-05-14 15:37:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:25.650743 | orchestrator | 2025-05-14 15:37:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:25.650854 | orchestrator | 2025-05-14 15:37:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:28.701505 | orchestrator | 2025-05-14 15:37:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:28.701680 | orchestrator | 2025-05-14 15:37:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:31.754233 | orchestrator | 2025-05-14 15:37:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:31.754912 | orchestrator | 2025-05-14 15:37:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:34.807852 | orchestrator | 2025-05-14 15:37:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:34.807969 | orchestrator | 2025-05-14 15:37:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:37.852565 | orchestrator | 2025-05-14 15:37:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:37.852700 | orchestrator | 2025-05-14 15:37:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:40.908131 | orchestrator | 2025-05-14 15:37:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:40.908237 | orchestrator | 2025-05-14 15:37:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:43.965023 | orchestrator | 2025-05-14 15:37:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:43.965142 | orchestrator | 2025-05-14 15:37:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:47.011459 | orchestrator | 2025-05-14 15:37:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:47.011687 | orchestrator | 2025-05-14 15:37:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:50.055988 | orchestrator | 2025-05-14 15:37:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:50.056110 | orchestrator | 2025-05-14 15:37:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:53.108265 | orchestrator | 2025-05-14 15:37:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:53.108401 | orchestrator | 2025-05-14 15:37:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:56.164412 | orchestrator | 2025-05-14 15:37:56 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:56.164529 | orchestrator | 2025-05-14 15:37:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:37:59.215695 | orchestrator | 2025-05-14 15:37:59 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:37:59.215793 | orchestrator | 2025-05-14 15:37:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:02.275016 | orchestrator | 2025-05-14 15:38:02 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:02.275127 | orchestrator | 2025-05-14 15:38:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:05.333172 | orchestrator | 2025-05-14 15:38:05 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in 
state STARTED 2025-05-14 15:38:05.333294 | orchestrator | 2025-05-14 15:38:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:08.385744 | orchestrator | 2025-05-14 15:38:08 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:08.385853 | orchestrator | 2025-05-14 15:38:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:11.452090 | orchestrator | 2025-05-14 15:38:11 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:11.452198 | orchestrator | 2025-05-14 15:38:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:14.510378 | orchestrator | 2025-05-14 15:38:14 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:14.510492 | orchestrator | 2025-05-14 15:38:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:17.553490 | orchestrator | 2025-05-14 15:38:17 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:17.553549 | orchestrator | 2025-05-14 15:38:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:20.605845 | orchestrator | 2025-05-14 15:38:20 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:20.605954 | orchestrator | 2025-05-14 15:38:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:23.659163 | orchestrator | 2025-05-14 15:38:23 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:23.659274 | orchestrator | 2025-05-14 15:38:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:26.720373 | orchestrator | 2025-05-14 15:38:26 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:26.720501 | orchestrator | 2025-05-14 15:38:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:29.782949 | orchestrator | 2025-05-14 15:38:29 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:29.783072 | orchestrator | 2025-05-14 15:38:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:32.846710 | orchestrator | 2025-05-14 15:38:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:32.846824 | orchestrator | 2025-05-14 15:38:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:35.902192 | orchestrator | 2025-05-14 15:38:35 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:35.902289 | orchestrator | 2025-05-14 15:38:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:38.979548 | orchestrator | 2025-05-14 15:38:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:38.979703 | orchestrator | 2025-05-14 15:38:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:42.040256 | orchestrator | 2025-05-14 15:38:42 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:42.040402 | orchestrator | 2025-05-14 15:38:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:45.091720 | orchestrator | 2025-05-14 15:38:45 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:45.091827 | orchestrator | 2025-05-14 15:38:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:48.142212 | orchestrator | 2025-05-14 15:38:48 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:48.142358 | orchestrator | 2025-05-14 15:38:48 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:51.199404 | orchestrator | 2025-05-14 15:38:51 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:51.199533 | orchestrator | 2025-05-14 15:38:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:54.270501 | orchestrator | 2025-05-14 15:38:54 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:54.270679 | orchestrator | 2025-05-14 15:38:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:38:57.325981 | orchestrator | 2025-05-14 15:38:57 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:38:57.326146 | orchestrator | 2025-05-14 15:38:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:00.380245 | orchestrator | 2025-05-14 15:39:00 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:00.380357 | orchestrator | 2025-05-14 15:39:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:03.429220 | orchestrator | 2025-05-14 15:39:03 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:03.429333 | orchestrator | 2025-05-14 15:39:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:06.481900 | orchestrator | 2025-05-14 15:39:06 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:06.481997 | orchestrator | 2025-05-14 15:39:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:09.531720 | orchestrator | 2025-05-14 15:39:09 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:09.531809 | orchestrator | 2025-05-14 15:39:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:12.589826 | orchestrator | 2025-05-14 15:39:12 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:12.589939 | orchestrator | 2025-05-14 15:39:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:15.646165 | orchestrator | 2025-05-14 15:39:15 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:15.646272 | orchestrator | 2025-05-14 15:39:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:18.705765 | orchestrator | 2025-05-14 15:39:18 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:18.705889 | orchestrator | 2025-05-14 15:39:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:21.757225 | orchestrator | 2025-05-14 15:39:21 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:21.757314 | orchestrator | 2025-05-14 15:39:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:24.802499 | orchestrator | 2025-05-14 15:39:24 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:24.802684 | orchestrator | 2025-05-14 15:39:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:27.851882 | orchestrator | 2025-05-14 15:39:27 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:27.852055 | orchestrator | 2025-05-14 15:39:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:30.897068 | orchestrator | 2025-05-14 15:39:30 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:30.897202 | orchestrator | 2025-05-14 15:39:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:33.946385 | 
orchestrator | 2025-05-14 15:39:33 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:33.946537 | orchestrator | 2025-05-14 15:39:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:37.013866 | orchestrator | 2025-05-14 15:39:37 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:37.015892 | orchestrator | 2025-05-14 15:39:37 | INFO  | Task 7b615653-fca8-4143-b2aa-4908567059e2 is in state STARTED 2025-05-14 15:39:37.015933 | orchestrator | 2025-05-14 15:39:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:40.076848 | orchestrator | 2025-05-14 15:39:40 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:40.078422 | orchestrator | 2025-05-14 15:39:40 | INFO  | Task 7b615653-fca8-4143-b2aa-4908567059e2 is in state STARTED 2025-05-14 15:39:40.078453 | orchestrator | 2025-05-14 15:39:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:43.135970 | orchestrator | 2025-05-14 15:39:43 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:43.137285 | orchestrator | 2025-05-14 15:39:43 | INFO  | Task 7b615653-fca8-4143-b2aa-4908567059e2 is in state STARTED 2025-05-14 15:39:43.137317 | orchestrator | 2025-05-14 15:39:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:46.187720 | orchestrator | 2025-05-14 15:39:46 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:46.188167 | orchestrator | 2025-05-14 15:39:46 | INFO  | Task 7b615653-fca8-4143-b2aa-4908567059e2 is in state STARTED 2025-05-14 15:39:46.188189 | orchestrator | 2025-05-14 15:39:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:49.233718 | orchestrator | 2025-05-14 15:39:49 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:49.234010 | orchestrator | 2025-05-14 15:39:49 | INFO  | Task 7b615653-fca8-4143-b2aa-4908567059e2 is in state SUCCESS 2025-05-14 15:39:49.234191 | orchestrator | 2025-05-14 15:39:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:52.281955 | orchestrator | 2025-05-14 15:39:52 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:52.282142 | orchestrator | 2025-05-14 15:39:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:55.332711 | orchestrator | 2025-05-14 15:39:55 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:55.332849 | orchestrator | 2025-05-14 15:39:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:39:58.391717 | orchestrator | 2025-05-14 15:39:58 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:39:58.391832 | orchestrator | 2025-05-14 15:39:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:01.438248 | orchestrator | 2025-05-14 15:40:01 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:01.438360 | orchestrator | 2025-05-14 15:40:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:04.488074 | orchestrator | 2025-05-14 15:40:04 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:04.488212 | orchestrator | 2025-05-14 15:40:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:07.534379 | orchestrator | 2025-05-14 15:40:07 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:07.534488 | 
orchestrator | 2025-05-14 15:40:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:10.582437 | orchestrator | 2025-05-14 15:40:10 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:10.582523 | orchestrator | 2025-05-14 15:40:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:13.632056 | orchestrator | 2025-05-14 15:40:13 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:13.632132 | orchestrator | 2025-05-14 15:40:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:16.689870 | orchestrator | 2025-05-14 15:40:16 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:16.690005 | orchestrator | 2025-05-14 15:40:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:19.743766 | orchestrator | 2025-05-14 15:40:19 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:19.743902 | orchestrator | 2025-05-14 15:40:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:22.791545 | orchestrator | 2025-05-14 15:40:22 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:22.791709 | orchestrator | 2025-05-14 15:40:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:25.841655 | orchestrator | 2025-05-14 15:40:25 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:25.841781 | orchestrator | 2025-05-14 15:40:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:28.893548 | orchestrator | 2025-05-14 15:40:28 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:28.893668 | orchestrator | 2025-05-14 15:40:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:31.947085 | orchestrator | 2025-05-14 15:40:31 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:31.947202 | orchestrator | 2025-05-14 15:40:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:34.992342 | orchestrator | 2025-05-14 15:40:34 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:34.992450 | orchestrator | 2025-05-14 15:40:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:38.044523 | orchestrator | 2025-05-14 15:40:38 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:38.044668 | orchestrator | 2025-05-14 15:40:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:41.096259 | orchestrator | 2025-05-14 15:40:41 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:41.096349 | orchestrator | 2025-05-14 15:40:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:44.150873 | orchestrator | 2025-05-14 15:40:44 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:44.150965 | orchestrator | 2025-05-14 15:40:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:47.200550 | orchestrator | 2025-05-14 15:40:47 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:47.200659 | orchestrator | 2025-05-14 15:40:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 15:40:50.256153 | orchestrator | 2025-05-14 15:40:50 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:50.256296 | orchestrator | 2025-05-14 15:40:50 | INFO  | Wait 1 second(s) until the next check 
2025-05-14 15:40:53.310795 | orchestrator | 2025-05-14 15:40:53 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:40:53.310898 | orchestrator | 2025-05-14 15:40:53 | INFO  | Wait 1 second(s) until the next check
[... the poll entries "Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED" / "Wait 1 second(s) until the next check" repeat about every 3 seconds from 15:40:56 through 15:47:29 ...]
2025-05-14 15:47:32.725231 | orchestrator | 2025-05-14 15:47:32 | INFO  | Task d9a132ba-c3f3-4689-8cc4-d8cc13ca3c9f is in state STARTED 2025-05-14 15:47:32.725325 | orchestrator | 2025-05-14 15:47:32 | INFO  | Wait 1 second(s) until the next check
2025-05-14 15:47:35.299939 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
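The run above ends with RESULT_TIMED_OUT while the deploy task is still reported as STARTED. As a rough illustration of this kind of wait loop (not the actual osism CLI code; get_task_state() is a hypothetical placeholder, and the roughly 3 second cadence seen above presumably includes the API round trip), a poll-until-done helper with an overall deadline could look like this:

import time

def wait_for_task(task_id, get_task_state, interval=1.0, deadline_seconds=7200):
    """Poll a task until it leaves PENDING/STARTED or the deadline expires."""
    deadline = time.monotonic() + deadline_seconds
    while time.monotonic() < deadline:
        state = get_task_state(task_id)          # hypothetical state lookup
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state                         # SUCCESS, FAILURE, ...
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {deadline_seconds} s")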
2025-05-14 15:47:35.301573 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-14 15:47:36.102218 | 2025-05-14 15:47:36.102379 | PLAY [Post output play] 2025-05-14 15:47:36.119894 | 2025-05-14 15:47:36.120037 | LOOP [stage-output : Register sources] 2025-05-14 15:47:36.191296 | 2025-05-14 15:47:36.191593 | TASK [stage-output : Check sudo] 2025-05-14 15:47:37.052940 | orchestrator | sudo: a password is required 2025-05-14 15:47:37.228925 | orchestrator | ok: Runtime: 0:00:00.016778
2025-05-14 15:47:37.241759 | 2025-05-14 15:47:37.241926 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-14 15:47:37.276857 | 2025-05-14 15:47:37.277060 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-14 15:47:37.360775 | orchestrator | ok 2025-05-14 15:47:37.370441 | 2025-05-14 15:47:37.370604 | LOOP [stage-output : Ensure target folders exist] 2025-05-14 15:47:37.829473 | orchestrator | ok: "docs" 2025-05-14 15:47:37.829816 | 2025-05-14 15:47:38.075566 | orchestrator | ok: "artifacts" 2025-05-14 15:47:38.324115 | orchestrator | ok: "logs" 2025-05-14 15:47:38.341094 | 2025-05-14 15:47:38.341235 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-14 15:47:38.375228 | 2025-05-14 15:47:38.375476 | TASK [stage-output : Make all log files readable] 2025-05-14 15:47:38.659344 | orchestrator | ok 2025-05-14 15:47:38.665726 | 2025-05-14 15:47:38.665857 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-14 15:47:38.700232 | orchestrator | skipping: Conditional result was False 2025-05-14 15:47:38.716543 | 2025-05-14 15:47:38.716716 | TASK [stage-output : Discover log files for compression] 2025-05-14 15:47:38.741834 | orchestrator | skipping: Conditional result was False 2025-05-14 15:47:38.754509 | 2025-05-14 15:47:38.754691 | LOOP [stage-output : Archive everything from logs]
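The stage-output tasks above create the docs, artifacts and logs staging folders on the node and make the staged log files readable by everyone. A minimal sketch of those two steps, assuming a zuul-output staging directory under the user's home (the path is an assumption, not taken from this job):

import os
import stat

stage_root = os.path.expanduser("~/zuul-output")   # assumed staging root

# "Ensure target folders exist"
for folder in ("docs", "artifacts", "logs"):
    os.makedirs(os.path.join(stage_root, folder), exist_ok=True)

# "Make all log files readable": add the read bit for owner, group and others
for root, _dirs, files in os.walk(os.path.join(stage_root, "logs")):
    for name in files:
        path = os.path.join(root, name)
        os.chmod(path, os.stat(path).st_mode | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)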
2025-05-14 15:47:38.815520 | 2025-05-14 15:47:38.815755 | PLAY [Post cleanup play] 2025-05-14 15:47:38.826958 | 2025-05-14 15:47:38.827098 | TASK [Set cloud fact (Zuul deployment)] 2025-05-14 15:47:38.867758 | orchestrator | ok 2025-05-14 15:47:38.876602 | 2025-05-14 15:47:38.876719 | TASK [Set cloud fact (local deployment)] 2025-05-14 15:47:38.910500 | orchestrator | skipping: Conditional result was False 2025-05-14 15:47:38.918884 | 2025-05-14 15:47:38.919010 | TASK [Clean the cloud environment]
2025-05-14 15:47:41.723166 | orchestrator | 2025-05-14 15:47:41 - clean up servers 2025-05-14 15:47:42.596038 | orchestrator | 2025-05-14 15:47:42 - testbed-manager 2025-05-14 15:47:43.715550 | orchestrator | 2025-05-14 15:47:43 - testbed-node-4 2025-05-14 15:47:43.808992 | orchestrator | 2025-05-14 15:47:43 - testbed-node-0 2025-05-14 15:47:43.893840 | orchestrator | 2025-05-14 15:47:43 - testbed-node-2 2025-05-14 15:47:43.983579 | orchestrator | 2025-05-14 15:47:43 - testbed-node-5 2025-05-14 15:47:44.069337 | orchestrator | 2025-05-14 15:47:44 - testbed-node-3 2025-05-14 15:47:44.167962 | orchestrator | 2025-05-14 15:47:44 - testbed-node-1 2025-05-14 15:47:44.254719 | orchestrator | 2025-05-14 15:47:44 - clean up keypairs 2025-05-14 15:47:44.275506 | orchestrator | 2025-05-14 15:47:44 - testbed 2025-05-14 15:47:44.302787 | orchestrator | 2025-05-14 15:47:44 - wait for servers to be gone
2025-05-14 15:47:51.183734 | orchestrator | 2025-05-14 15:47:51 - clean up ports 2025-05-14 15:47:51.419225 | orchestrator | 2025-05-14 15:47:51 - 06e94f6f-74eb-49d2-8833-0aef71bc1e1d 2025-05-14 15:47:51.639202 | orchestrator | 2025-05-14 15:47:51 - 1dc0e347-5ef9-44ef-aa29-42701dc9cbdc 2025-05-14 15:47:51.823135 | orchestrator | 2025-05-14 15:47:51 - 3121acda-0209-4c96-a4c7-f833102098e0 2025-05-14 15:47:53.115871 | orchestrator | 2025-05-14 15:47:53 - 3160603b-26c0-4eeb-8e26-f8b8b6688fa2 2025-05-14 15:47:53.335925 | orchestrator | 2025-05-14 15:47:53 - 577b0f7e-50e4-4080-9e74-2b4f598d74f6 2025-05-14 15:47:53.517757 | orchestrator | 2025-05-14 15:47:53 - 6d6368c8-9838-42eb-a019-2bae1ad2f1ed 2025-05-14 15:47:53.724875 | orchestrator | 2025-05-14 15:47:53 - 6f77814e-7b9d-4a16-adfd-169051e872c3
2025-05-14 15:47:53.950933 | orchestrator | 2025-05-14 15:47:53 - clean up volumes 2025-05-14 15:47:54.099537 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-manager-base 2025-05-14 15:47:54.141609 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-5-node-base 2025-05-14 15:47:54.185667 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-2-node-base 2025-05-14 15:47:54.225321 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-3-node-base 2025-05-14 15:47:54.268823 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-4-node-base 2025-05-14 15:47:54.309835 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-0-node-base 2025-05-14 15:47:54.355352 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-1-node-base 2025-05-14 15:47:54.398810 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-1-node-4 2025-05-14 15:47:54.445400 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-8-node-5 2025-05-14 15:47:54.489650 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-4-node-4 2025-05-14 15:47:54.530825 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-0-node-3 2025-05-14 15:47:54.573858 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-5-node-5 2025-05-14 15:47:54.620425 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-3-node-3 2025-05-14 15:47:54.671657 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-7-node-4 2025-05-14 15:47:54.713579 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-2-node-5 2025-05-14 15:47:54.756603 | orchestrator | 2025-05-14 15:47:54 - testbed-volume-6-node-3
2025-05-14 15:47:54.803053 | orchestrator | 2025-05-14 15:47:54 - disconnect routers 2025-05-14 15:47:55.680150 | orchestrator | 2025-05-14 15:47:55 - testbed 2025-05-14 15:47:58.493872 | orchestrator | 2025-05-14 15:47:58 - clean up subnets 2025-05-14 15:47:58.530122 | orchestrator | 2025-05-14 15:47:58 - subnet-testbed-management 2025-05-14 15:47:58.749391 | orchestrator | 2025-05-14 15:47:58 - clean up networks 2025-05-14 15:47:58.954085 | orchestrator | 2025-05-14 15:47:58 - net-testbed-management 2025-05-14 15:47:59.266670 | orchestrator | 2025-05-14 15:47:59 - clean up security groups 2025-05-14 15:47:59.302971 | orchestrator | 2025-05-14 15:47:59 - testbed-management 2025-05-14 15:47:59.389869 | orchestrator | 2025-05-14 15:47:59 - testbed-node 2025-05-14 15:47:59.490686 | orchestrator | 2025-05-14 15:47:59 - clean up floating ips 2025-05-14 15:47:59.526345 | orchestrator | 2025-05-14 15:47:59 - 81.163.192.165 2025-05-14 15:47:59.906341 | orchestrator | 2025-05-14 15:47:59 - clean up routers 2025-05-14 15:47:59.958323 | orchestrator | 2025-05-14 15:47:59 - testbed 2025-05-14 15:48:00.975583 | orchestrator | ok: Runtime: 0:00:21.574329
2025-05-14 15:48:00.980073 | 2025-05-14 15:48:00.980258 | PLAY RECAP 2025-05-14 15:48:00.980380 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-05-14 15:48:00.980433 | 2025-05-14 15:48:01.128229 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
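The "Clean the cloud environment" task above tears the testbed down in a fixed order: servers, keypairs, wait for the servers to be gone, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself. A rough openstacksdk sketch of that order (not the script the testbed repository actually uses; the cloud name, the "testbed" name prefixes and the port filter are assumptions):

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# clean up servers and keypairs, then wait for the servers to be gone
servers = [s for s in conn.compute.servers() if s.name.startswith("testbed")]
for server in servers:
    conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name.startswith("testbed"):
        conn.compute.delete_keypair(keypair)
for server in servers:
    conn.compute.wait_for_delete(server)

# clean up ports and volumes
for port in conn.network.ports():
    if not port.device_owner:              # leftover, unbound ports only
        conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith("testbed-volume"):
        conn.block_storage.delete_volume(volume)

# disconnect routers, then remove subnets, networks, security groups,
# floating IPs and finally the routers themselves
routers = [r for r in conn.network.routers() if r.name == "testbed"]
subnets = [s for s in conn.network.subnets() if s.name.startswith("subnet-testbed")]
for router in routers:
    for subnet in subnets:
        conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
for subnet in subnets:
    conn.network.delete_subnet(subnet)
for network in conn.network.networks():
    if network.name.startswith("net-testbed"):
        conn.network.delete_network(network)
for group in conn.network.security_groups():
    if group.name.startswith("testbed"):
        conn.network.delete_security_group(group)
for ip in conn.network.ips():              # floating IPs
    if ip.port_id is None:
        conn.network.delete_ip(ip)
for router in routers:
    conn.network.delete_router(router)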
2025-05-14 15:48:01.131056 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-14 15:48:01.882949 | 2025-05-14 15:48:01.883121 | PLAY [Cleanup play] 2025-05-14 15:48:01.900015 | 2025-05-14 15:48:01.900154 | TASK [Set cloud fact (Zuul deployment)] 2025-05-14 15:48:01.959293 | orchestrator | ok 2025-05-14 15:48:01.969184 | 2025-05-14 15:48:01.969332 | TASK [Set cloud fact (local deployment)] 2025-05-14 15:48:01.993368 | orchestrator | skipping: Conditional result was False 2025-05-14 15:48:02.007244 | 2025-05-14 15:48:02.007382 | TASK [Clean the cloud environment]
2025-05-14 15:48:03.167519 | orchestrator | 2025-05-14 15:48:03 - clean up servers 2025-05-14 15:48:03.714653 | orchestrator | 2025-05-14 15:48:03 - clean up keypairs 2025-05-14 15:48:03.735068 | orchestrator | 2025-05-14 15:48:03 - wait for servers to be gone 2025-05-14 15:48:03.838205 | orchestrator | 2025-05-14 15:48:03 - clean up ports 2025-05-14 15:48:03.913450 | orchestrator | 2025-05-14 15:48:03 - clean up volumes 2025-05-14 15:48:04.010596 | orchestrator | 2025-05-14 15:48:04 - disconnect routers 2025-05-14 15:48:04.039011 | orchestrator | 2025-05-14 15:48:04 - clean up subnets 2025-05-14 15:48:04.059821 | orchestrator | 2025-05-14 15:48:04 - clean up networks 2025-05-14 15:48:05.021781 | orchestrator | 2025-05-14 15:48:05 - clean up security groups 2025-05-14 15:48:05.045081 | orchestrator | 2025-05-14 15:48:05 - clean up floating ips 2025-05-14 15:48:05.071911 | orchestrator | 2025-05-14 15:48:05 - clean up routers 2025-05-14 15:48:05.550559 | orchestrator | ok: Runtime: 0:00:02.278857
2025-05-14 15:48:05.554621 | 2025-05-14 15:48:05.554821 | PLAY RECAP 2025-05-14 15:48:05.555030 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-14 15:48:05.555099 | 2025-05-14 15:48:05.684907 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-14 15:48:05.687419 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-14 15:48:06.418462 | 2025-05-14 15:48:06.418618 | PLAY [Base post-fetch] 2025-05-14 15:48:06.434918 | 2025-05-14 15:48:06.435056 | TASK [fetch-output : Set log path for multiple nodes] 2025-05-14 15:48:06.490556 | orchestrator | skipping: Conditional result was False 2025-05-14 15:48:06.505189 | 2025-05-14 15:48:06.505412 | TASK [fetch-output : Set log path for single node] 2025-05-14 15:48:06.552834 | orchestrator | ok 2025-05-14 15:48:06.562087 | 2025-05-14 15:48:06.562236 | LOOP [fetch-output : Ensure local output dirs] 2025-05-14 15:48:07.054541 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/logs" 2025-05-14 15:48:07.346356 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/artifacts" 2025-05-14 15:48:07.607748 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/docs"
2025-05-14 15:48:07.633076 | 2025-05-14 15:48:07.633257 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-14 15:48:08.557594 | orchestrator | changed: .d..t...... ./ 2025-05-14 15:48:08.558005 | orchestrator | changed: All items complete 2025-05-14 15:48:08.558060 | 2025-05-14 15:48:09.298595 | orchestrator | changed: .d..t...... ./ 2025-05-14 15:48:10.033720 | orchestrator | changed: .d..t...... ./ 2025-05-14 15:48:10.062709 | 2025-05-14 15:48:10.063167 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-14 15:48:10.103605 | orchestrator | skipping: Conditional result was False 2025-05-14 15:48:10.115906 | orchestrator | skipping: Conditional result was False 2025-05-14 15:48:10.129524 | 2025-05-14 15:48:10.129620 | PLAY RECAP 2025-05-14 15:48:10.129681 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-14 15:48:10.129711 | 2025-05-14 15:48:10.250449 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
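The fetch-output play above first makes sure the logs, artifacts and docs directories exist on the executor side under the build's work directory and then pulls the staged content from the node; the ".d..t...... ./" lines are rsync itemize-changes output. A sketch of the same idea (the SSH destination and the remote zuul-output path are assumptions, not values taken from this job):

import pathlib
import subprocess

build_work = pathlib.Path("/var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work")
node = "zuul-worker@orchestrator.example.org"      # assumed SSH destination

for name in ("logs", "artifacts", "docs"):
    local_dir = build_work / name
    local_dir.mkdir(parents=True, exist_ok=True)   # "Ensure local output dirs"
    # "Collect logs, artifacts and docs": pull the staged directory from the node
    subprocess.run(
        ["rsync", "-a", "--itemize-changes",
         f"{node}:zuul-output/{name}/", f"{local_dir}/"],
        check=True,
    )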
2025-05-14 15:48:10.252933 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-14 15:48:10.991955 | 2025-05-14 15:48:10.992117 | PLAY [Base post] 2025-05-14 15:48:11.006728 | 2025-05-14 15:48:11.006914 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-14 15:48:12.003392 | orchestrator | changed 2025-05-14 15:48:12.013852 | 2025-05-14 15:48:12.013987 | PLAY RECAP 2025-05-14 15:48:12.014066 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-14 15:48:12.014142 | 2025-05-14 15:48:12.140597 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-14 15:48:12.141608 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-14 15:48:12.922083 | 2025-05-14 15:48:12.922252 | PLAY [Base post-logs] 2025-05-14 15:48:12.932975 | 2025-05-14 15:48:12.933115 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-14 15:48:13.470008 | localhost | changed 2025-05-14 15:48:13.487204 | 2025-05-14 15:48:13.487367 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-14 15:48:13.516180 | localhost | ok 2025-05-14 15:48:13.523461 | 2025-05-14 15:48:13.523626 | TASK [Set zuul-log-path fact] 2025-05-14 15:48:13.552346 | localhost | ok 2025-05-14 15:48:13.568075 | 2025-05-14 15:48:13.568211 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-14 15:48:13.606133 | localhost | ok
2025-05-14 15:48:13.613871 | 2025-05-14 15:48:13.614091 | TASK [upload-logs : Create log directories] 2025-05-14 15:48:14.124878 | localhost | changed 2025-05-14 15:48:14.130508 | 2025-05-14 15:48:14.130669 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-14 15:48:14.634659 | localhost -> localhost | ok: Runtime: 0:00:00.006863 2025-05-14 15:48:14.644044 | 2025-05-14 15:48:14.644228 | TASK [upload-logs : Upload logs to log server] 2025-05-14 15:48:15.212122 | localhost | Output suppressed because no_log was given
2025-05-14 15:48:15.216495 | 2025-05-14 15:48:15.216678 | LOOP [upload-logs : Compress console log and json output] 2025-05-14 15:48:15.265752 | localhost | skipping: Conditional result was False 2025-05-14 15:48:15.271608 | localhost | skipping: Conditional result was False 2025-05-14 15:48:15.279211 | 2025-05-14 15:48:15.279452 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-14 15:48:15.329178 | localhost | skipping: Conditional result was False 2025-05-14 15:48:15.329827 | 2025-05-14 15:48:15.333774 | localhost | skipping: Conditional result was False 2025-05-14 15:48:15.347119 | 2025-05-14 15:48:15.347387 | LOOP [upload-logs : Upload console log and json output]
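The generate-zuul-manifest task above writes an index of the collected logs before they are uploaded. Purely as an illustration of that kind of step (the real role has its own manifest schema; the path reuses the build's work directory shown earlier):

import json
import pathlib

log_root = pathlib.Path("/var/lib/zuul/builds/94308d9cc51747de973250c0c0b71a8a/work/logs")

# Build a flat index of every staged log file with its size in bytes.
entries = [
    {"name": str(path.relative_to(log_root)), "size": path.stat().st_size}
    for path in sorted(log_root.rglob("*"))
    if path.is_file()
]
(log_root / "zuul-manifest.json").write_text(json.dumps({"tree": entries}, indent=2))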